
What is an adequate amount of commentary on code?


Comments

  • Registered Users Posts: 1,922 ✭✭✭fergalr


    GreeBo wrote: »
    Wouldn't a test that's difficult to maintain imply that the code it's testing is also difficult to maintain... and having a test is thus a lovely warm fuzzy blanket?

    Not necessarily. I can imagine valid situations where your test code is complex (and maybe even necessarily so) although the code it's testing is simple.

    Maybe we are talking about the tests for your pseudo-random number generator.


    Or maybe someone has decided they want to test that your code properly generates the Mandelbrot set, and comes up with a very complex test for some fairly simple code.
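
    To make that concrete, here's a minimal sketch (all names are hypothetical, and the sample size and tolerance are assumptions rather than tuned figures) of simple code whose only meaningful test is statistical, and therefore more complex than the code under test:

    public final class Lcg {
        private long state;

        public Lcg(long seed) { this.state = seed; }

        // One step of a linear congruential generator: a single line of logic.
        public long next() {
            state = state * 6364136223846793005L + 1442695040888963407L;
            return state;
        }
    }

    And the test, in its own file:

    import org.junit.Assert;
    import org.junit.Test;

    public class LcgTest {
        // The test has to be statistical to mean anything: draw many values
        // and check the top bit isn't noticeably biased.
        @Test
        public void topBitIsRoughlyUniform() {
            Lcg rng = new Lcg(42L);
            int negatives = 0;
            int draws = 100_000;
            for (int i = 0; i < draws; i++) {
                if (rng.next() < 0) negatives++; // top bit set => negative long
            }
            double ratio = negatives / (double) draws;
            Assert.assertTrue("bias: " + ratio, Math.abs(ratio - 0.5) < 0.01);
        }
    }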


    But anyway, let's say we accept your premise - what's the relevance really?

    There's often a benefit to having tests, sure; that's not in dispute; I'm just saying there's a complexity cost too, and the two have to be compared.


  • Registered Users Posts: 27,073 ✭✭✭✭GreeBo


    fergalr wrote: »
    But anyway, let's say we accept your premise - what's the relevance really?

    There's often a benefit to having tests, sure; that's not in dispute; I'm just saying there's a complexity cost too, and the two have to be compared.

    The relevance is that if I have complex code that is being changed, I want a confidence level that nothing unexpected/unnoticed has been broken.

    Unit tests give you this confidence level. I think there is always a benefit to having unit tests. Sure, there may be a cost, but there is a cost with any code you write (error handling, for example, or logging), and we accept that cost because the benefits outweigh it.


  • Registered Users Posts: 870 ✭✭✭moycullen14


    Here's a good comment.

    . . .
    hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
    hashOut.length = SSL_SHA1_DIGEST_LEN;
    if ((err = SSLFreeBuffer(&hashCtx)) != 0)
        goto fail;
    if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
    [COLOR="Red"]    goto fail;  /* MISTAKE! THIS LINE SHOULD NOT BE HERE */
    [/COLOR]if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
    
    err = sslRawVerify(...);
    

    From the recent iOS security upgrade

    http://nakedsecurity.sophos.com/2014/02/24/anatomy-of-a-goto-fail-apples-ssl-bug-explained-plus-an-unofficial-patch/


  • Registered Users Posts: 40,038 ✭✭✭✭Sparks


    GreeBo wrote: »
    I think there is always a benefit to having unit tests
    Without disagreeing or agreeing and in all seriousness, where's the study that proves that?
    (See here for the motivation to the question).


  • Registered Users Posts: 40,038 ✭✭✭✭Sparks


    Here's a good comment.
    ...
    From the recent iOS security upgrade

    Is it just me, or wouldn't even lint have caught that?
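
    (Some linters would: the duplicated goto makes the final hash check unreachable, which e.g. clang's -Wunreachable-code can flag, though it's off by default. For comparison, a Java-flavoured sketch of the same bug shape, with hypothetical names - javac rejects now-unreachable statements outright:)

    static int verifySignature(int err) {
        if (err != 0)
            return err;
            return err; // accidentally duplicated line: always returns here
        // Any statement placed after the duplicate would be a compile error
        // in Java ("unreachable statement"); C compilers historically say
        // nothing unless you opt in to the warning.
    }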


  • Registered Users Posts: 27,073 ✭✭✭✭GreeBo


    Sparks wrote: »
    Without disagreeing or agreeing and in all seriousness, where's the study that proves that?
    (See here for the motivation to the question).

    Probably the same place as the study that disproves it? :)

    However, I think of it a lot like insurance: it doesn't have to be useful every day/time to be worth it.
    Plenty of times I've confidently made changes only to see the tests fail and go "oh yeah....that!"

    Like a helmet or seatbelt, once is enough for me.


  • Registered Users Posts: 40,038 ✭✭✭✭Sparks


    GreeBo wrote: »
    Probably the same place as the study that disproves it? :)
    Either study would be good, but that's kind of sidestepping the point by a country mile...
    However, I think of it a lot like insurance: it doesn't have to be useful every day/time to be worth it.
    And like insurance, sometimes there's absolutely no point in having it (e.g. extended warranties which don't exceed the EU minimum legal basics, insuring things you can afford to replace, etc., etc.).

    But unlike insurance, this is something we really ought to prove. Just as a basic industry practice, y'know? For example, what if the study shows that there is a benefit, but it's only worth it once $METRIC exceeds $THRESHOLDVALUE?


  • Subscribers Posts: 4,075 ✭✭✭IRLConor


    Sparks wrote: »
    Without disagreeing or agreeing and in all seriousness, where's the study that proves that?
    (See here for the motivation to the question).

    "Making Software, What Really Works and Why We Believe It" has a nice large collection of references to good studies around software development practices.

    It doesn't have studies specifically on unit testing but it does include a systematic review on TDD which is quite interesting.


  • Registered Users Posts: 8,219 ✭✭✭Calina


    Ooh, there's a Kindle edition of that. I might buy it and add it to the other books I have to read....


  • Subscribers Posts: 4,075 ✭✭✭IRLConor


    Calina wrote: »
    Ooh, there's a Kindle edition of that. I might buy it and add it to the other books I have to read....

    Yup, I have the Kindle edition. It's the only way I was able to check it for testing references since I'm not currently co-located with my bookshelf. :)


  • Registered Users Posts: 40,038 ✭✭✭✭Sparks


    IRLConor wrote: »
    "Making Software, What Really Works and Why We Believe It" has a nice large collection of references to good studies around software development practices.
    And is my current book-in-progress on the work reading list :)
    (Though ooo, kindle - that might help speed up my progress on it...)
    It doesn't have studies specifically on unit testing but it does include a systematic review on TDD which is quite interesting.
    Yup, but the unit testing thing is more general than TDD, and it'd be nice to see if someone ever studied - for example - whether the benefits of unit testing don't kick in until after the initial development (Michael Feathers' book on legacy systems describes the kind of thing I'm thinking about here), whether code review was better than unit tests in the beginning (there's some work on code review, but not that specific question), or whether you get the benefits right from day one.

    That's kind of the thing really - we all know that it's so, the same way people always knew a whole bunch of things that later scientific studies showed were completely bogus. So actual evidence is downright necessary if we want to be able to say our profession is just another kind of engineering (in the generic applying-science-and-economics-to-build-the-world-around-us sense of the word).


  • Registered Users Posts: 8,219 ✭✭✭Calina


    Now that I am actually thinking about this, I wonder how much of this is driven by the IT process consultancy industry.


  • Subscribers Posts: 4,075 ✭✭✭IRLConor


    Calina wrote: »
    Now that I am actually thinking about this, I wonder how much of this is driven by the IT process consultancy industry.

    Probably less than the amount that's driven by hype from loudmouth self-publicists.


  • Registered Users Posts: 27,073 ✭✭✭✭GreeBo


    Sparks wrote: »
    Either study would be good, but that's kindof sidestepping the point by a country mile...

    tbf, it's sidestepping it just as much as you are. If there is no empirical evidence either way, then the "evidence" argument is moot, no?

    I can't prove that unit tests are always beneficial any more than you can prove they aren't.
    Sparks wrote: »
    And like insurance, sometimes there's absolutely no point in having it (eg. extended warranties which don't exceed the EU minimum legal basics, insuring things you can afford to replace, etc, etc).

    What's the equivalent of free insurance in software development though?
    I think we can all agree that the earlier you find a bug, the cheaper it is to fix? Can any industry really afford (or want) to fix bugs that are not found during development/construction?
    Sparks wrote: »
    But unlike insurance, this is something we really ought to prove. Just as a basic industry practice, y'know? For example, what if the study shows that there is a benefit, but it's only worth it once $METRIC exceeds $THRESHOLDVALUE ?

    I don't believe you can prove it any more than the source control point. What's the baseline: your company, mine, some other, Google?

    For me the bottom line is that if you are paying people to write code that you want to sell/make money from, then you want them doing that as much as possible. Taking the hit of having them write unit tests means that any bugs introduced are typically fixed by the creator at a much cheaper cost than being found by another person and potentially fixed by yet another person.

    A test is written once and will continue "proving" the code still works as intended until the code is modified to change that expected behaviour. That's a lot of free regression testing that you won't get with manual testing/code reviews.


  • Registered Users Posts: 40,038 ✭✭✭✭Sparks


    GreeBo wrote: »
    tbf, it's sidestepping it just as much as you are. If there is no empirical evidence either way, then the "evidence" argument is moot, no?
    I don't see how -- the argument is that we should collect evidence to find out. It's not an argument for or against the method, just that we should find out first.
    I can't prove that unit tests are always beneficial any more than you can prove they aren't.
    That's not the argument I'm making. I'm specifically saying I neither disagree nor agree with you. I literally don't know either way. I can see good arguments for both sides of that and some for other sides too (yes, we're up to N sides here, bear with me).


    What's the equivalent of free insurance in software development though?
    There's no such thing either in the insurance or the software world (unit tests aren't free). My analogy was trying to say (badly) that with insurance you know what it costs and you have guidelines based ultimately on maths that tell you if you need insurance or not. We don't seem to have those guidelines for unit tests because we don't appear to have the underlying data.
    I think we can all agree that the earlier you find a bug, the cheaper it is to fix?
    Well, yes as it happens because someone studied that and proved it.
    But there's more than one way to find bugs. And what little actual experimental evidence we have has shown that at least one of those other methods is highly effective in the right circumstances (code review). And there's nothing to say you have to choose one or the other either, nobody's studied that approach.
    A test is written once and will continue "proving" the code still works as intended until the code is modified to change that expected behaviour.
    That's a good argument.
    "So long as the test was right in the first place" is a good counterargument (and one I've hit in practice on code that's been shipping for a long time).
    And when you have good arguments for and against, it really highlights that we need the problem to be studied more. That's my point here. We don't have those studies yet and we ought to have them.


  • Registered Users Posts: 27,073 ✭✭✭✭GreeBo


    Sparks wrote: »
    I don't see how -- the argument is that we should collect evidence to find out. It's not an argument for or against the method, just that we should find out first.

    That's not the argument I'm making. I'm specifically saying I neither disagree nor agree with you. I literally don't know either way. I can see good arguments for both sides of that and some for other sides too (yes, we're up to N sides here, bear with me).




    There's no such thing either in the insurance or the software world (unit tests aren't free). My analogy was trying to say (badly) that with insurance you know what it costs and you have guidelines based ultimately on maths that tell you if you need insurance or not. We don't seem to have those guidelines for unit tests because we don't appear to have the underlying data.


    Well, yes as it happens because someone studied that and proved it.
    But there's more than one way to find bugs. And what little actual experimental evidence we have has shown that at least one of those other methods is highly effective in the right circumstances (code review). And there's nothing to say you have to choose one or the other either, nobody's studied that approach.


    That's a good argument.
    "So long as the test was right in the first place" is a good counterargument (and one I've hit in practice on code that's been shipping for a long time).
    And when you have good arguments for and against, it really highlights that we need the problem to be studied more. That's my point here. We don't have those studies yet and we ought to have them.

    And the argument makes sense, but, and there is always a but, I'm going to continue demanding unit tests until someone proves to me that they cost more than they are worth.

    Code reviews should help you to have correct tests :)


  • Registered Users Posts: 40,038 ✭✭✭✭Sparks


    GreeBo wrote: »
    And the argument makes sense, but, and there is always a but, I'm going to continue demanding unit tests until someone proves to me that they cost more than they are worth.
    That's as valid as anything else given what we know at the moment.
    Code reviews should help you to have correct tests :)
    Yes; but again, the counterargument is that code reviews and unit tests aren't free and if code reviews catch over 90% of all bugs before you ever even run the compiler the first time, would we be better off spending time on those and not on unit tests?
    (And no, we don't know the answer. Could be either, could be both).


  • Registered Users Posts: 870 ✭✭✭moycullen14


    Sparks wrote: »
    That's as valid as anything else given what we know at the moment.

    Yes; but again, the counterargument is that code reviews and unit tests aren't free and if code reviews catch over 90% of all bugs before you ever even run the compiler the first time, would we be better off spending time on those and not on unit tests?
    (And no, we don't know the answer. Could be either, could be both).

    Am I right in saying that code review is the one thing that has been empirically proven to improve quality/lower cost in development? You rarely come across it, though. I guess because if (when) it is badly managed, it causes a lot of problems.


  • Registered Users Posts: 8,219 ✭✭✭Calina


    Am I right in saying that code review is the one thing that has been empirically proven to improve quality/lower cost in development? You rarely come across it, though. I guess because if (when) it is badly managed, it causes a lot of problems.

    Was mandatory at my last job. Nothing went to UAT without code review. I suppose this is why I get stunned when I hear that it's not done in a lot of places.


  • Registered Users Posts: 870 ✭✭✭moycullen14


    Calina wrote: »
    Was mandatory at my last job. Nothing went to UAT without code review. I suppose this is why I get stunned when I hear that it's not done in a lot of places.

    It's just that in a long(ish) career here and in the UK, I have rarely come across it being done systematically. One of the few places I saw it had terrible problems with it. Temper tantrums, dysfunctional behavior, the lot. It was down to a lack of coding standards, mainly, so everything was based on prejudice.

    Maybe with API & Framework based systems, it should be less contentious.

    One thing for sure, most environments would benefit hugely from code walk-throughs - just as long as I don't have to do it!


  • Registered Users Posts: 8,219 ✭✭✭Calina


    It's just that in a long(ish) career here and in the UK, I have rarely come across it being done systematically. One of the few places I saw it had terrible problems with it. Temper tantrums, dysfunctional behavior, the lot. It was down to a lack of coding standards, mainly, so everything was based on prejudice.

    Maybe with API & Framework based systems, it should be less contentious.

    One thing for sure, most environments would benefit hugely from code walk-throughs - just as long as I don't have to do it!

    I worked in a highly specialised environment which almost certainly contributed to the culture, plus it had been the culture in the place pretty much since day one. If it's all you've ever known, maybe you never take it personally. I suspect if you bring in code review, people start to feel defensive about errors being found and do take it personally.

    That being said, I can't see how you'd get an effective code review process working if there are no clear local standards; I guess the next problem is if you try to introduce them, that causes arguments as well.

    Interestingly, while there is a lot of discussion about commentary in college courses I've attended since I started programming, the question of coding standards isn't something that has come up so much.


  • Registered Users Posts: 40,038 ✭✭✭✭Sparks


    Am I right in saying that code review is the one thing that has been empirically proven to improve quality/lower cost in development?
    It's more accurate to say that it's one of the only methods that currently has been studied properly -- but that doesn't mean it's the only thing that works nor that it doesn't come with caveats.
    You rarely come across it, though. I guess because if (when) it is badly managed, it causes a lot of problems.
    That's one of the caveats :D And I have seen some people who were obnoxiously, unacceptably awful at it (to the point where whole teams quit because of them). Mind you, the same's true for almost any method you encounter. Personally, I think it has a lot more to do with its IBM-esque image; people seem to think of it the way they think of Waterfall. But if done right, it's actually highly effective, if expensive (your effectiveness in code review drops off dramatically after an hour's work, so there's only so much code you can review at one time - not a surprising result if you look at projects like the Linux kernel and the sizes of patches they work with).


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    GreeBo wrote: »
    The relevance is that if I have complex code that is being changed, I want a confidence level that nothing unexpected/unnoticed has been broken.

    Unit tests give you this confidence level. I think there is always a benefit to having unit tests. Sure, there may be a cost, but there is a cost with any code you write (error handling, for example, or logging), and we accept that cost because the benefits outweigh it.

    My point is that the benefit of having a particular test does not always outweigh the cost of having that test.

    This is obvious to me.

    But here's an example. Let's say it's a fairly small two-person project working on some math code, for a video game we are writing. You want to calculate the error of a prediction, using squared error.
    Let's say you know that code is only going to be called with double values in the range -1 to 1.
    public double getErrorInPredictions(double target, double prediction) {
        double distance = target - prediction;
        return distance * distance;
    }
    


    Should you write unit tests for this code?

    Maybe we should write a test that puts in (5,7) and checks we get approximately 4 back?
    And another that checks it works with whole number values? And that it works when target is negative? And when prediction is negative? And when both are negative?
    Maybe we should write a whole battery of tests like that?

    In general, no, we shouldn't. Maybe write one unit test, but probably zero, as long as there's a functional test elsewhere that we're reasonably happy drives this code (e.g. in our game, we can see the monster that uses this code intercept the player).
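
    (For what it's worth, if you did write that one test, it might be no more than this JUnit sketch - the method is copied in so the example stands alone, and the tolerance is an assumption:)

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ErrorInPredictionsTest {
        // Copied from the example above so this compiles on its own.
        static double getErrorInPredictions(double target, double prediction) {
            double distance = target - prediction;
            return distance * distance;
        }

        // One in-range sanity check; arguably even this costs more in upkeep
        // than it buys for a method this simple.
        @Test
        public void squaredErrorOfInRangeValues() {
            assertEquals(1.0, getErrorInPredictions(0.5, -0.5), 1e-9);
        }
    }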

    This is the same reason we don't write a comment like:
    //the distance is the difference between the target and the prediction.


    Even if the test is simple and the code being tested is simple, the tests add to the complexity of our codebase overall.

    You mention error handling.
    It's the same thing with error handling.

    It's very important to know when to leave error handling out.
    Newbie programmers do things like write error handling in every method they write, even when the method is only called by some other code they've just written - on the basis that, well, it could change in future.

    By the time you do this, you've covered your code in error handling; you can no longer see what the code actually does; and that costs you more than it benefits.
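
    (A hedged sketch of that pattern, with hypothetical names - the first version is the newbie style, the second is the same logic once the ritual checks are stripped away:)

    // Over-defensive: a private helper, called from exactly one place we also
    // wrote, smothered in checks that can never fire.
    private static double squaredErrorDefensive(Double target, Double prediction) {
        if (target == null) throw new IllegalArgumentException("target is null");
        if (prediction == null) throw new IllegalArgumentException("prediction is null");
        if (Double.isNaN(target) || Double.isNaN(prediction))
            throw new IllegalArgumentException("NaN input");
        double d = target - prediction;
        return d * d;
    }

    // The same logic, readable at a glance.
    private static double squaredError(double target, double prediction) {
        double d = target - prediction;
        return d * d;
    }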


    Everything should be done only after assessing the cost and the benefit.

    And, if you assert that every test's benefit outweighs its cost - or that every piece of error handling code's benefit outweighs its cost - or the same for comments - then I'm pretty sure you're wrong.


    Again, cost versus benefit - if you are writing the code for the last-ditch killer-asteroid intercept mission, then you do things differently than if you are writing for a video game - obviously.


  • Registered Users Posts: 20,912 ✭✭✭✭Stark


    Very interesting points. I agree that 100% coverage is an unfeasible target for unit test coverage; there are always pieces of code where it's more hassle than it's worth to write tests around. I generally assign quite a high benefit-to-cost ratio to unit tests though (orders of magnitude higher than for comments); I think 80-90% line coverage is a good target, depending on the code.

    As for TDD, I quite like it myself; there is a focus on keeping things lean (you don't write "just in case" code). Strictly speaking, TDD wouldn't advocate writing spurious tests for input conditions that can never happen, as your example describes, so in that sense it's quite good. And by writing tests for tricky pieces of code up front, you take the redeploy out of the "deploy, test, rewrite, redeploy" dev cycle, which more than makes up for the cost of writing the test. Especially if your container is a pain to deploy to.

    I'm generally in favour of the test pyramid as well (high number of unit tests -> low number of system tests). It's a lot less painful to debug and fix an error when you catch it at source than when you're trying to debug something at a system level and aren't sure where the error's coming from. I don't think relying on a functional test like "does the monster move in the right direction" is a good way of verifying that the basic components of your physics engine work correctly; sounds like a recipe for lots of hair pulling.

    Edit: Reading that "Making Software" book at the moment (God bless Safari Books). Quite wittily written:
    Regardless of the reporting quality of the TDD trials, a related question is raised: “Should the textbook definition of TDD be followed in all real-life cases?” Sometimes patients get better even with a half-sized or quarter-sized pill modified for their specific work context and personal style.

    Word. My personal belief is the correct TDD dosage is somewhat less than what a typical manager might prescribe.

    The overall conclusion from the book seems to be inconclusive though.




  • Registered Users Posts: 27,073 ✭✭✭✭GreeBo


    fergalr wrote: »
    My point is that the benefit of having a particular test does not always outweigh the cost of having that test.

    This is obvious to me.

    But here's an example. Let's say it's a fairly small two-person project working on some math code, for a video game we are writing. You want to calculate the error of a prediction, using squared error.
    Let's say you know that code is only going to be called with double values in the range -1 to 1.
    public double getErrorInPredictions(double target, double prediction) {
        double distance = target - prediction;
        return distance * distance;
    }
    


    Should you write unit tests for this code?

    Maybe we should write a test that puts in (5,7) and checks we get approximately 4 back?
    And another that checks it works with whole number values? And that it works when target is negative? And when prediction is negative? And when both are negative?
    Maybe we should write a whole battery of tests like that?

    In general, no, we shouldn't. Maybe write one unit test, but probably zero, as long as there's a functional test elsewhere that we're reasonably happy drives this code (e.g. in our game, we can see the monster that uses this code intercept the player).

    This is the same reason we don't write a comment like:
    //the distance is the difference between the target and the prediction.

    Even if the test is simple and the code being tested is simple, the tests add to the complexity of our codebase overall.

    You mention error handling.
    It's the same thing with error handling.

    It's very important to know when to leave error handling out.
    Newbie programmers do things like write error handling in every method they write, even when the method is only called by some other code they've just written - on the basis that, well, it could change in future.

    By the time you do this, you've covered your code in error handling; you can no longer see what the code actually does; and that costs you more than it benefits.


    Everything should be done only after assessing the cost and the benefit.

    And, if you assert that every test's benefit outweighs its cost - or that every piece of error handling code's benefit outweighs its cost - or the same for comments - then I'm pretty sure you're wrong.


    Again, cost versus benefit - if you are writing the code for the last-ditch killer-asteroid intercept mission, then you do things differently than if you are writing for a video game - obviously.

    Honestly, I think your example is flawed: why would you test that subtraction works? Unit tests should be testing your (complicated) logic to confirm changes haven't broken it, not that the compiler or runtime still works as expected.

    With error handling, if my code needs to do something in the case of an error then I handle it; that might mean throwing an exception or maybe closing resources etc., but it's the job of my code, not just relying on fixing it later when it's a problem, so I disagree with you strongly here tbh.

    Even with cost/benefit thinking, it doesn't have to be mission critical; for your company it is critical. You can't just trot out the old "no one died" when you discover a bug in production...


  • Registered Users Posts: 20,912 ✭✭✭✭Stark


    GreeBo wrote: »
    With error handling, if my code needs to do something in the case of an error then I handle it; that might mean throwing an exception or maybe closing resources etc., but it's the job of my code, not just relying on fixing it later when it's a problem, so I disagree with you strongly here tbh.

    I agree strongly with fergalr here. Pedantically handling every possible error case can make code unreadable and slow down development time dramatically. You can't ignore error cases and always program for the happy path either, but there is a balance to be struck. The reason we have exceptions in the first place is so we can bubble error conditions up the stack when we don't want to handle everything at a low level.
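
    (A minimal sketch of that balance, with hypothetical names: the low level declares the failure rather than pedantically handling it, and the one caller that can actually react catches it:)

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    class ConfigLoader {
        // Low level: no try/catch here; just declare what can go wrong.
        static byte[] readConfig(Path path) throws IOException {
            return Files.readAllBytes(path);
        }

        // Higher level: the one place with a sensible response (fall back).
        static byte[] loadOrDefault(Path path, byte[] defaults) {
            try {
                return readConfig(path);
            } catch (IOException e) {
                return defaults;
            }
        }
    }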
    GreeBo wrote: »
    Even with cost/benefit thinking, it doesn't have to be mission critical; for your company it is critical. You can't just trot out the old "no one died" when you discover a bug in production...

    You can't catch every single bug in a product. If it takes 3 months to get something out that's 95% bug free vs 6 months to get something out that's 100% bug free, it's generally better to get it out in 3 months and use the revenue gained to do follow up fixes. A single bug in production? Yeah I'd happily trot out "no-one died".

    I would disagree somewhat with fergalr on the unit testing strategy. In my experience, an experienced project team can ship code with good coverage as quickly as code without coverage, by leveraging the solid foundation tests give you for building out a codebase quickly. So the cost is actually low, if any. Of course it depends on developer experience. Inexperienced developers often write bad tests the first time they attempt unit testing, and that comes with a cost, just as writing bad production code comes with a cost. But any good developer should learn to write good tests.




  • Registered Users Posts: 40,038 ✭✭✭✭Sparks


    Stark wrote: »
    You can't catch every single bug in a product.
    Technically, you can (effectively, at least). However, it's expensive, so few do it. But "you can't" and "you don't want to pay for it" are different things.

    As to not handling errors... folks, please don't ever write a network stack or anything else low-level? M'kay? Life's hard enough for the rest of us as it is.


  • Subscribers Posts: 4,075 ✭✭✭IRLConor


    Stark wrote: »
    I agree that 100% coverage is an unfeasible target for unit test coverage; there are always pieces of code where it's more hassle than it's worth to write tests around.

    IMHO, that's a sign that you might have a design failure. Code that's hard to test is a big red flag for me.

    Yes, some languages will force you to write code in such a way that you end up with lots of hard-to-test corners, but again that's less of an excuse for low coverage and more of a reason to avoid that language in the future.


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    GreeBo wrote: »
    Honestly, I think your example is flawed: why would you test that subtraction works?

    The example is an example of a method that I would not test.

    Because the complexity after adding a test outweighs the benefit of testing this code.

    Once we all agree that such examples occur, we are then just arguing about where to draw the line. And that's my point - whether or not to test - much like whether or not to comment - is a judgement call, which depends on the context.


    (Secondly: the method calculates squared error, not just subtraction.)


    GreeBo wrote: »
    Unit tests should be testing your (complicated) logic to confirm changes haven't broken it, not that the compiler or runtime still works as expected.

    Sure - but there's no clear and bright line between those two things.

    There's a sliding scale of method complexity. I think it's obvious that the method I wrote doesn't need to be tested - i.e. that, as long as the compiler is correct, it's going to work (or I can be sure enough of that that I don't need to test).

    But, as I add lines to the method, I'll eventually reach a point of complexity where that's not the case, and where it makes sense to introduce a test.

    I should not test before that point, and I should test afterwards.

    The exact point will depend on context - what I'm building, the cost of bugs, the cost of complexity, etc.
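
    (For contrast, a hedged sketch, with hypothetical names, of the kind of method that for me crosses that line - a few branches, an empty-input case, and an easy off-by-one make a test worth its upkeep:)

    // Enough edge cases (length mismatch, empty input, per-term capping)
    // that a couple of unit tests start paying for themselves.
    public static double meanCappedSquaredError(double[] targets,
                                                double[] predictions,
                                                double cap) {
        if (targets.length != predictions.length)
            throw new IllegalArgumentException("length mismatch");
        if (targets.length == 0)
            return 0.0;
        double total = 0.0;
        for (int i = 0; i < targets.length; i++) {
            double d = targets[i] - predictions[i];
            total += Math.min(d * d, cap);  // cap each term, not the sum
        }
        return total / targets.length;
    }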
    GreeBo wrote: »
    With error handling, if my code needs to do something in the case of an error then I handle it; that might mean throwing an exception or maybe closing resources etc., but it's the job of my code, not just relying on fixing it later when it's a problem, so I disagree with you strongly here tbh.

    If you've a method A that is only called from one other place in your video game, which you've just written, do you always write error handling in method A for the case where it's given bad parameters? I often don't, if it's a small self-contained project.


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    GreeBo wrote: »
    Even with cost/benefit thinking, it doesn't have to be mission critical; for your company it is critical. You can't just trot out the old "no one died" when you discover a bug in production...

    This is a silly thing to say.

    Of course you can trot out the 'no one died' reason when you discover a bug in production.

    We would write our code VERY DIFFERENTLY if production bugs were likely to result in deaths.

    Formal verification, intense code review, formal specification, formal QA, checking multiple independent implementations for agreement - that sort of thing. Right? Hopefully this is obvious?


    We don't use those techniques making our video games, or our crud apps or whatever, because the benefits do not outweigh the costs.



    But of course, if you only had 8 hours to write the asteroid intercept code before everyone died, then you'd be back to cutting corners again. Because, again, cost/benefit. Which depends on context.

