
What is an adequate amount of commentary on code?

Comments

  • Registered Users Posts: 1,922 ✭✭✭fergalr


    Sparks wrote: »
    Technically, you can (effectively at least). However it's expensive so few do it. But "you can't" and "you don't want to pay for it" are different things.

    You can increase your confidence, but you can't ever be sure - which you are acknowledging when you say 'effectively at least' - so agreed there.
    Sparks wrote: »
    As to not handling errors... folks, please don't ever write a network stack or anything else low-level? M'kay? Life's hard enough for the rest of us as it is.

    It depends, it depends... it depends on why you are writing it, what's the context... is it a proof of concept? For a new network analysis startup? Or the prototype new netcode that has to ship in a week? For an academic paper? Or is it going to be embedded into firmware? Maybe it's the low-level code that safeguards the nukes?

    You can't be dogmatic about this stuff. Maybe sometimes you want 'defence in depth', where every part of the system distrusts every other part and checks the input it gets; other times that'll screw up the code so that it's hard to see the logic of what your system is doing.

    What is considered best practice changes all the time - SaaS has changed a lot - you can fix an internal bug almost instantly, you don't have to ship CDs.

    It's like when Nintendo used to ship a game on the SNES: they'd have to have found almost all the big bugs before shipping; now you can patch your game after release. Changes like that in your deployment have got to impact how you handle and respond to errors, if you are doing things right.


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    IRLConor wrote: »
    Stark wrote: »
    I agree that 100% coverage is an unfeasible target for unit test coverage, there are always pieces of code where it's more hassle than it's worth to write tests around.

    IMHO, that's a sign that you might have a design failure. Code that's hard to test is a big red flag for me.

    I don't even agree with this way of thinking about 'coverage'. What does '100% coverage' even mean? It might mean that every line of code is exercised by a test - but it doesn't mean that you have tested 100% of the possible states the system can be in.

    States are what we care about, not SLOC. And we can't test all the states, in any non-trivial stateful system.*

    I mean, you can always move to a functional setup; then you can actually start to think about those things. Maybe you are saying not using a functional setup is a design failure, Conor? If so, I have a badge you can have...


    *Famous CS paper: "Out of the tar pit" explains this stuff in great detail - not so hot on the second part of the paper explaining functional relational programming, but the intro to the problem is excellent, including great discussion on why tests only get you so far. http://shaffner.us/cs/papers/tarpit.pdf

    Even just read page 4 - so nice.
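    The state-vs-lines point can be sketched in code (a hedged illustration; the class and discount rule are invented): a stateful method's behaviour depends on call history, so a test that merely executes its lines pins down very little, while a pure function's behaviour is fixed by its arguments alone.

```java
class StateVsLines {
    private int calls = 0;

    // Stateful: the result depends on how many times it was called before,
    // so 'every line ran in a test' says little about all reachable states.
    public int nextDiscount() {
        calls++;
        return (calls % 3 == 0) ? 10 : 0; // every third call gets a discount
    }

    // Functional: same inputs, same output; each tested input genuinely
    // pins down one piece of behaviour.
    public static int discountFor(int callNumber) {
        return (callNumber % 3 == 0) ? 10 : 0;
    }
}
```

    A single call to nextDiscount() in a test gives 100% line coverage of that method while leaving most of its call-history states unexplored.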


  • Closed Accounts Posts: 4,436 ✭✭✭c_man


    I was thinking of this thread today. A bug has brought me back to a piece of code I wrote when I was pretty green :) and for some reason I remember it pretty well. Anyway, it's been made a further balls of since then, as I found on reviewing it... As a recent sceptic-turned-believer in UT and quasi-TDD, I applied myself to rewriting the bad boy! Grand, got a nice change going and had tests covering the one case that was reported failing and a few of the more common states it would encounter. So with production code changes done I started cranking out test cases for all the other states... a bit in, I gave up. I couldn't be arsed. I don't think those states need to be covered.

    I just know this is going to be commented upon in code review, and I have no good answer bar that I don't think it's worth covering the other combos :)


  • Subscribers Posts: 4,075 ✭✭✭IRLConor


    fergalr wrote: »
    I don't even agree with this way of thinking about 'coverage'. What does '100% coverage' even mean? It might mean that every line of code is exercised by a test - but it doesn't mean that you have tested 100% of the possible states the system can be in.

    States are what we care about, not SLOC. And we can't test all the states, in any non-trivial stateful system.*

    I know. That's why I look at 80% line coverage as "at least 20% of this code is entirely untested". 100% line coverage is a starting point, not a goal.

    It's also why I'm more interested in testable code rather than the tests themselves. After all, you're going to find bugs in your code and you're going to have to write some sort of tests to isolate and understand them so you may as well help yourself up front. More testable code leads to shorter time to resolution.
    fergalr wrote: »
    I mean, you can always move to a functional setup, then you can actual start to think about those thinks. Maybe you are saying not using a functional setup is a design failure, Conor? If so, I have a badge you can have...

    I'm not a member of the Haskell Justice League, thanks. :p

    You can write functional code in damn near any language. In very many cases it's the correct decision too. I just don't like being forced to do it.


  • Registered Users Posts: 27,033 ✭✭✭✭GreeBo


    Testing branches is far more important than testing lines of code: branches imply business logic; lines of code imply nothing.
    You could have 60% line coverage and not be testing anything bar your compiler/runtime.
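    The line-vs-branch point can be made concrete with a hedged Java sketch (the class and business rule are invented): a single test executes nearly every line of the method yet exercises only one of its branches.

```java
class Discounts {
    // Hypothetical business rule: orders over 100 get 10% off.
    public static double priceAfterDiscount(double total) {
        double discount = 0.0;
        if (total > 100.0) {          // the branch carrying the business logic
            discount = total * 0.10;
        }
        return total - discount;
    }
}
```

    A test that only calls priceAfterDiscount(50.0) touches every line except the one inside the if, so line coverage looks healthy while the discount rule is never tested at all; a branch-coverage metric would flag the gap.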


  • Registered Users Posts: 20,894 ✭✭✭✭Stark


    IRLConor wrote: »
    IMHO, that's a sign that you might have a design failure. Code that's hard to test is a big red flag for me.

    Yes, some languages will force you to write code in such a way that you end up with lots of hard-to-test corners but again that's less of an excuse for low coverage and more of a reason to avoid that language in the future.

    Case in point: I had a logger initialised in a static block in a class, and a call to that logger as the only line in a method (it was part of an error handling interface). I was in a pedantic mood that day and wanted 100% coverage, so I ended up adding PowerMock as a dependency and using its static-class intercept functionality just for that one method. I successfully achieved full coverage, but it was a silly waste of time and an increase in complexity for the sake of chasing a magic number, really.
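    The awkward shape described above looks roughly like this (a sketch; all names are invented). The static initialisation is what forces heavyweight static mocking; injecting the logger instead makes the method coverable with an ordinary fake:

```java
import java.util.logging.Logger;

// Hard to test: the logger lives in static state, so a plain unit test
// can't substitute it, which pushes you toward tools like PowerMock.
class StaticErrorHandler {
    private static final Logger LOG =
            Logger.getLogger(StaticErrorHandler.class.getName());

    public void handleError(Exception e) {
        LOG.severe("operation failed: " + e.getMessage());
    }
}

// More testable: the logger is passed in, so a test can hand over a
// logger it controls and inspect what was written.
class InjectedErrorHandler {
    private final Logger log;

    InjectedErrorHandler(Logger log) { this.log = log; }

    public void handleError(Exception e) {
        log.severe("operation failed: " + e.getMessage());
    }
}
```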


  • Registered Users Posts: 40,055 ✭✭✭✭Sparks


    fergalr wrote: »
    You can increase your confidence, but you can't ever be sure
    Just on that point, we don't actually know that yet (and most of the Formal Methods research field would disagree with you, they just can't prove their case yet either :D )


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    Sparks wrote: »
    Just on that point, we don't actually know that yet (and most of the Formal Methods research field would disagree with you, they just can't prove their case yet either :D )

    We start to get into tricky territory, going too far down this road - which I think is what you are alluding to.

    It's like the physicists estimating the risk of a black hole from the LHC: they put it at some extremely low number - let's say 1 in 10^50 or something - but someone else points out that we can't use that as a lower bound, because it's now much more probable that the lower bound on our certainty is determined by things like [all the physicists being wrong about their paradigm] or [some subtle mistake that has never been spotted in the theoretical framework] rather than [the actual figure provided by the theoretical framework].

    And you can never get away from things like that; it's kind of a bootstrapping problem...


  • Registered Users Posts: 40,055 ✭✭✭✭Sparks


    True for physics, where we're guessing at the rules, checking our guesses, and the system wasn't our invention. Formal methods goes the other way: we invented the system, proved the fundamentals, and every step thereafter which is proven is basically irrefutable. (There are no error bars involved - a mathematical proof is an all-or-nothing sort of thing, for various reasons relating to the nature of maths; follow that rabbit hole into metaphysical argument only if you have nothing better to be doing with your time and have a bout of insomnia.) Since FM's basic thesis is to find a way to create a mathematical expression of a given program to prove its correctness, if you could pull it off, you could then state that there are no bugs within the program, without any qualifiers implied or explicit.

    Of course, FM hasn't yet developed methods or tools that let them do that on anything past a fairly trivial level, and it's an open question if it's even possible, but thus far nobody's been able to prove that it cannot be done, only that it's desperately difficult (so was landing people on the moon in 1950, but we did it faster than someone in 1950 could produce a new voter, so saying something's impossible just because it's difficult should be something we've all learnt to avoid doing by now :D )


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    Sparks wrote: »
    True for physics, where we're guessing at the rules, checking our guesses, and the system wasn't our invention. Formal methods goes the other way: we invented the system, proved the fundamentals, and every step thereafter which is proven is basically irrefutable. (There are no error bars involved - a mathematical proof is an all-or-nothing sort of thing, for various reasons relating to the nature of maths; follow that rabbit hole into metaphysical argument only if you have nothing better to be doing with your time and have a bout of insomnia.) Since FM's basic thesis is to find a way to create a mathematical expression of a given program to prove its correctness, if you could pull it off, you could then state that there are no bugs within the program, without any qualifiers implied or explicit.

    Hmm - still not sure I agree here.

    Even if you prove something correct, there's still going to be gaps between [your mathematical model of the world] and [the world itself]. As long as the program has to interface with something external to itself, you'll have the question of bugs, even with very strong formal methods.


    I mean, also, at a practical level, people rarely *really* prove anything. Even mathematicians. I know machine-checked proofs and so on attempt to eventually get away from this - but it's hard to do anything other than the most simple stuff when working at that level of abstraction. Maybe some day with strong AI we'll be able to apply that sort of technology to useful problems; then we'll just have the gap problem above to address. But until then, it's all just about sufficiently convincing proof sketches, which have their problems, as shown in this excellent example:

    http://googleresearch.blogspot.ie/2006/06/extra-extra-read-all-about-it-nearly.html
    It is not sufficient merely to prove a program correct; you have to test it too. Moreover, to be really certain that a program is correct, you have to test it for all possible input values, but this is seldom feasible. With concurrent programs, it's even worse: You have to test for all internal states, which is, for all practical purposes, impossible.


  • Advertisement
  • Registered Users Posts: 1,922 ✭✭✭fergalr


    Here is Mr. Rails himself talking about tests damaging designs, etc.; presumably now it'll be much easier to get people to believe testing isn't free, so that's good:

    http://david.heinemeierhansson.com/2014/test-induced-design-damage.html


  • Registered Users Posts: 11,977 ✭✭✭✭Giblet


    fergalr wrote: »
    Let's say you know that code is only going to be called with double values in the range between -1 and 1.
    public double getErrorInPredictions(double target, double prediction) {
        double distance = target - prediction;
        return distance * distance;
    }
    


    You say you know it's going to be called with a range of -1 and 1 but you don't guard against this. A simple unit test would have helped you write this method better. It's a public method, so you don't know the caller.

    It may sound obvious, but the error will probably crash the game, not return a glitchy result.

    getErrorInPredictions(-Double.MAX_VALUE, 1)

    I would argue that you probably don't need to write the tests first, but you need to refactor in some tests at some point.

    DHH above makes a point about TDD in Rails, and controllers. I agree with him; in ASP MVC I do not unit test controllers either (bear in mind he still suggests integration tests!), but only because my controllers don't do anything that hasn't been covered already; to test them is to retest what I've already tested, and testing private methods doesn't make sense in a lot of cases.
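    The guard being suggested, as a hedged sketch (one option among several - an assert, clamping, or document-and-trust are the alternatives argued over in this thread): fail fast on out-of-range input instead of silently returning a huge squared distance.

```java
class Predictions {
    public static double getErrorInPredictions(double target, double prediction) {
        // Guard the documented assumption: both inputs lie in [-1, 1].
        if (target < -1 || target > 1 || prediction < -1 || prediction > 1) {
            throw new IllegalArgumentException(
                    "inputs must be in [-1, 1]: " + target + ", " + prediction);
        }
        double distance = target - prediction;
        return distance * distance;
    }
}
```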


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    Giblet wrote: »
    You say you know it's going to be called with a range of -1 and 1 but you don't guard against this.

    Think about what you just wrote for a second.

    Of course you shouldn't guard against things you know.


    It's the same reason you don't write a test to check integer assignment.


    You should guard against assumptions you have that might be wrong, where the chance of something violating the assumption, combined with the impact of the assumption being violated, outweighs the cost of checking the assumption. Sure.

    But if you know the method is only going to be called with a certain range, you don't guard against it.


    Programmers reading this might be thinking "Oh, haha, well, that's naive. You haven't worked in the real world. In the real world, your assumptions get violated all the time, because Johnny who inherits your code won't know your assumptions. blah blah"


    Well, it's naive to be absolutist about these things, in either direction. It depends on the situation. In my example, it was a 2-person project shipping a game. So, you are either the programmer writing the code that calls that method, or someone you work closely with is. And your game is going to ship or not ship soon, and probably that method will be thrown away next week anyway.

    So, if you write error handling that guards everything against everything, you just take too long to develop anything, you've made the wrong cost/benefit tradeoff, and you are being a suboptimal developer.

    Again, the point I've been making is that you'd make different decisions for the code you write for the asteroid intercept mission, and different decisions again for the same mission if your launch window is closing in 24 hours.

    Giblet wrote: »
    A simple unit test would have helped you write this method better. It's a public method, so you don't know the caller.

    Public and private is just a way of formalising in code what you do and don't know about who'll call your code. It's a way of putting boundaries around your assumptions, so you don't drown in error checking and documentation of interfaces.

    But it's an imperfect capture of what programmers actually know in their heads. It can't precisely model those risk boundaries, which have to do with things like organisational structure, which programmers are writing which code, the passage of time, which code is more fertile ground for bugs, etc.

    So it's wrong to say 'oh, it's a public method, so you don't know the caller'. The knowledge of whether or not I know the caller comes first. The public method is just an expression of that - an attempt to capture it. You've got the causality mixed up - your tail is wagging your dog.

    Giblet wrote: »
    It may sound obvious, but the error will probably crash the game, not return a glitchy result.

    getErrorInPredictions(-Double.MAX_VALUE, 1)

    Except it won't crash the game, because that won't happen.


    It's like The Karate Kid. Best block is not be there.

    Best way to handle errors is not have errors.


    In practice you can't always do this, OF COURSE.
    There are interfaces where you don't have control over them, or where they are interacting with some fluid part of the system, or something where you can't trust what comes into the interface.

    For everything else, there's [just assuming things are correct].

    int x = 50;
    int y = 20;
    
    
    if ((x*y) > 50000) {
        //handle error
    }
    

    See the code above in a CRUD app? Fire the programmer.

    See the code above in your micro-satellite code? Well, you probably want to have a better discussion about how you handle cosmic radiation interference, but it's a discussion, not a termination interview.
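    For contrast, a sketch of what deliberate hardening can look like in the satellite case: triple modular redundancy with a majority vote is a standard radiation-tolerance technique (not something from this thread, and only meaningful on hardware where bit flips actually occur; the names are invented).

```java
class Voted {
    // Compute the value three times and take the majority; if all three
    // disagree, there is no safe answer to return.
    static int votedProduct(int x, int y) {
        int a = x * y;
        int b = x * y;
        int c = x * y;
        if (a == b || a == c) return a;
        if (b == c) return b;
        throw new IllegalStateException("no majority among redundant results");
    }
}
```

    On an ordinary JVM all three results are always equal (a compiler may even fold them), so this is purely illustrative of the shape of the discussion, not production radiation handling.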

    Giblet wrote: »
    I would argue that you probably don't need to write the tests first, but you need to refactor in some tests at some point.

    Yes, generally 'some tests at some point' is very easy to get behind. I agree.
    Giblet wrote: »
    DHH above makes a point about TDD in rails, and controllers. I agree with him, in ASP MVC I do not unit test controllers either (bear in mind he still suggests integration tests!), but only because my controllers don't do anything that hasn't been covered already, so to test them is to retest what I've already tested and testing private methods doesn't make sense in a lot of cases.

    You see - it's all about knowledge boundaries. It's not about public vs private, or about testing everything, or tests being free. It's about cost vs benefit, and basic risk analysis.


  • Registered Users Posts: 40,055 ✭✭✭✭Sparks


    fergalr wrote: »
    Think about what you just wrote for a second.
    Of course you shouldn't guard against things you know.
    Well, strictly the problem there isn't the words you highlighted, but the one I've underlined. And the example you've given to say it's not always so is incredibly esoteric btw - two people working on a small chunk of code with a one-shot-and-forget ship is a really astonishingly rare sort of project. The vast vast majority of coding doesn't work that way. And frankly, even if there was just one other coder on the project, I'm still sticking an assert in there because paranoia's a job skill :D


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    Sparks wrote: »
    Well, strictly the problem there isn't the words you highlighted, but the one I've underlined.

    If you know a method will only be called with certain parameters, then that's all there is to it.

    It doesn't matter what other people know; that's a second-order thing that's fully captured, for the purposes of the decision about whether to write a guard, in your estimate of the probability of the interface being violated.

    Sure, you might be wrong, but then you didn't really know what you thought you did. But that's all you need to capture in order to reason correctly about the decision probabilistically. What you should pay to avoid the risk is the cost of it happening times the estimated probability of it happening.

    Might be fun to formalise all that, in this context, but someone has probably done it already.

    Sparks wrote: »
    And the example you've given to say it's not always so is incredibly esoteric btw - two people working on a small chunk of code with a one-shot-and-forget ship is a really astonishingly rare sort of project.

    You are overstating that. It's certainly not 'incredibly esoteric' for two people to work on a small chunk of code that might be shipped, or canned, etc.

    It mightn't be the bulk of programming, but it happens a lot. It even happens within big organisations.


    Now, all the big projects you work on might have started out as badly designed small projects which everyone thought would be deleted next year, but now power the economies of nations, and, uh-oh.

    But that doesn't mean that most small projects are going to go on to power the economies of nations (that's the prosecutor's fallacy, of course). And you can't set out to write small projects as if that's what they were all likely to do - not if you want them to succeed with the resources they have.
    Sparks wrote: »
    The vast vast majority of coding doesn't work that way. And frankly, even if there was just one other coder on the project, I'm still sticking an assert in there because paranoia's a job skill :D

    It's a balance though - the best choice isn't the paranoid one, in a lot of circumstances. Yes, that totally depends on the organisation, the context, etc., but that's my point - I'm not talking about extreme edge cases.


  • Technology & Internet Moderators Posts: 28,791 Mod ✭✭✭✭oscarBravo


    fergalr wrote: »
    It's a balance though - the best choice isn't the paranoid one, in a lot of circumstances. Yes, that totally depends on the organisation, the context, etc., but that's my point - I'm not talking about extreme edge cases.

    What would it cost to stick an "assert" in there, though? If you know for a fact that the parameters will always be in range, then there are no side effects of the assertion, and you've covered yourself if what you know for a fact ever changes in the future.

    I've made assumptions in code because I knew for a fact what the bounds of the function arguments were. Those assumptions have come back to bite me five years later when, due to the scope of the overall project having moved on, the assumptions were no longer true. I'd rather have to deal with an "AssertionError at line x" than a subtle "wat" situation that requires extensive debugging. And I'm speaking as someone who has been a solo developer for the best part of a decade; I only have my own assumptions to deal with.
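    The assert-the-assumption approach, in sketch form (the class and ranges are invented for illustration): a Java assert records the assumption at near-zero cost, because assertions only run when the JVM is started with -ea and disappear entirely in production, which is the "no side effects" property being described.

```java
class Bounds {
    public static double getErrorInPredictions(double target, double prediction) {
        // Document-and-check the assumption; with -ea a future violation
        // fails loudly at this line instead of producing a subtle 'wat'.
        assert target >= -1 && target <= 1 : "target out of range: " + target;
        assert prediction >= -1 && prediction <= 1 : "prediction out of range: " + prediction;
        double distance = target - prediction;
        return distance * distance;
    }
}
```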


  • Registered Users Posts: 27,033 ✭✭✭✭GreeBo


    fergalr wrote: »
    If you know a method will only be called with certain parameters, then that's all there is to it.

    I don't understand how you can state that you "know" it won't be called with negative values?
    It's public... in 4 years' time it could be living as some compiled jar somewhere that someone else starts to use... how do you know what's going to happen in the future?

    Honestly I think your previous answer of "you haven't worked in the real world" is correct; in the real world anyone can call a public method with anything, sometimes 10 years after you originally wrote it.


  • Registered Users Posts: 20,894 ✭✭✭✭Stark


    GreeBo wrote: »
    I don't understand how you can state that you "know" it won't be called with negative values?
    It's public... in 4 years' time it could be living as some compiled jar somewhere that someone else starts to use... how do you know what's going to happen in the future?

    Honestly I think your previous answer of "you haven't worked in the real world" is correct; in the real world anyone can call a public method with anything, sometimes 10 years after you originally wrote it.

    Ah here now. The public modifier only means that the method is callable by methods outside the class. It doesn't mean "accessible by anyone anywhere". Most of the public methods in our codebases are only called from private methods within classes in the same project, not accessible from outside the project at all. And if someone does decide to hack around things and instantiate that class directly and start calling methods on it, well they deserve what they get.

    Now I'm not advocating that we should be flippant about making assumptions, but making a judgement call as to how low level or high level a method is purely based on the visibility modifier is a bit silly. You can't know at what level the method is going to be used without more context. If the method was a high level method, yes put in guards. If the method is a low level method and there are guards higher up the stack, then no point in repeating those guards.

    Eg:
    class UserService {
       public User getUser(String userId) {
          checkNotNull(userId);
          return userDao.getUser(userId);
       }
    }
    
    class UserDao {
       public User getUser(String userId) {
          checkNotNull(userId);    // Do we really need it again!?
          return blahBlahBlah();
       }
    }
    


  • Registered Users Posts: 11,977 ✭✭✭✭Giblet


    I never ever use public to mean "the actual public"...

    Callable by anyone, of course, assumes it's part of a library being consumed, even internally, and that the call is not encapsulated.


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    oscarBravo wrote: »
    What would it cost to stick an "assert" in there, though? If you know for a fact that the parameters will always be in range, then there are no side effects of the assertion, and you've covered yourself if what you know for a fact ever changes in the future.

    Sure, 'asserts' are often very cheap, and, as such, worth putting in.

    Still not free, though.

    If you've data coming through your system, and you assert the same thing about that data at 5 different points as it goes down through your system, then, if you want to change something about the data, you've got 5 asserts to change.

    E.g. if the range you want to support changes from [-1 to 1] to [-10 to 10]


    Of course, that depends on what you chose to assert: did you assert the incoming number was between -1 and 1 (the domain-defined allowable values), or maybe a bigger range - the largest range of values for which the limiting function would still work? Or maybe some combination of the two: the largest range you could ever see making sense, but that is still well within the range for which the function will work correctly?

    The more generous your assert is, in terms of what it allows through, the more scope you'll have to change the behaviour of the inner functions without having to change your asserts. Of course, the more generous the assert is, the less it is doing.
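    One way to soften the five-asserts problem (a sketch, not from the thread; names are invented): name the bound once, so widening [-1, 1] to [-10, 10] is a one-line edit and every assert that references the constant follows automatically.

```java
class PredictionRange {
    // The domain bound lives in exactly one place.
    static final double MAX_ABS = 1.0; // change to 10.0 and all checks follow

    static boolean inRange(double v) {
        return v >= -MAX_ABS && v <= MAX_ABS;
    }
}
```

    Each of the five asserts then becomes assert PredictionRange.inRange(value); and stops encoding its own private copy of the range.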
    oscarBravo wrote: »
    I've made assumptions in code because I knew for a fact what the bounds of the function arguments were. Those assumptions have come back to bite me five years later when, due to the scope of the overall project having moved on, the assumptions were no longer true. I'd rather have to deal with an "AssertionError at line x" than a subtle "wat" situation that requires extensive debugging. And I'm speaking as someone who has been a solo developer for the best part of a decade; I only have my own assumptions to deal with.

    Yeah, we've all done this. That doesn't mean we should put guards or asserts everywhere though. It depends.

    This is similar to the example Stark gave, but, you know, let's say you're writing the UI code: do you check that the username coming from the database is in canonical form? Or do you assume the username was canonicalised before it went into the database?

    If you do the former, you're slightly 'safer' in some sense - it's more 'defence in depth'. You are certainly safer if someone violates the assumptions for the database.
    But if you put such checks everywhere, at every level - and especially if you try to handle the error rather than just asserting (sometimes asserting doesn't make sense for your production system, but that's another discussion) - then you can end up with code that has a mess of unnecessary error handling.

    Whether you should do that depends on how likely you think the code will be running in 5 years, how likely it'll be you have to change all your asserts, the costs each way, etc.


  
  • Registered Users Posts: 1,922 ✭✭✭fergalr


    GreeBo wrote: »
    I don't understand how you can state that you "know" it won't be called with negative values?
    It's public... in 4 years' time it could be living as some compiled jar somewhere that someone else starts to use... how do you know what's going to happen in the future?

    Honestly I think your previous answer of "you haven't worked in the real world" is correct; in the real world anyone can call a public method with anything, sometimes 10 years after you originally wrote it.

    Stark's post answers this really well, in my opinion.

    Like, 'knowing X' is believing that the probability of X being true is very high. Depending on how high that probability is, and the costs either way, you make your judgement call.

    If you get dogmatic about it in either direction, you get bad code.

    One way, you end up with fragile code that isn't robust against unanticipated user input, say.
    The other way, you end up with code that checks the username is correct at every single step in the system. If you have enough such checks, then when you look at your code, instead of simple, clear methods you see 20 lines of error checking and handling for a method that then does 3 lines of actual work - and you see this for every method, everywhere, and the code can't be easily changed.


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    Giblet wrote: »
    I never ever use public to mean "the actual public"...

    Callable by anyone, of course, assumes it's part of a library being consumed, even internally, and that the call is not encapsulated.

    Not 100% clear on what you are saying there.

    In practice, in real systems, methods that are 'public' are often encapsulated by the rest of the system, and aren't exposed to people who will (e.g. adversarially) abuse their implicit interfaces.

    But you generally shouldn't write every 'public' method as if it was a library call, with the robust level of error handling, checking and documentation that that would involve.

    I mean, this might not be the case if you were writing code that you knew wasn't going to change over the course of your project, and you knew all that work making your code library-quality made sense, and defence in depth was really important in your domain, etc.


  • Registered Users Posts: 20,894 ✭✭✭✭Stark


    A lot of this discussion reminds me of the arguments around checked exceptions vs unchecked exceptions:

    http://tutorials.jenkov.com/java-exception-handling/checked-or-unchecked-exceptions.html

    In short - for checked exceptions: everything's explicit. Against: they pollute your stack with throws declarations and make it difficult to change your code later.
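    In code, the trade-off looks roughly like this sketch (method names invented): the checked version makes the failure mode explicit in every caller's signature; the unchecked version keeps signatures clean but makes the failure easy to ignore.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

class Config {
    // Checked: explicit, but the 'throws' clause ripples up through
    // every caller's signature.
    static String readChecked(Path p) throws IOException {
        return new String(Files.readAllBytes(p));
    }

    // Unchecked: callers' signatures stay clean, and changing the failure
    // behaviour later doesn't touch every throws declaration.
    static String readUnchecked(Path p) {
        try {
            return new String(Files.readAllBytes(p));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```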


  • Registered Users Posts: 40,055 ✭✭✭✭Sparks


    fergalr wrote: »
    If you've data coming through your system, and you assert the same thing about that data at 5 different points as it goes down through your system, then, if you want to change something about the data, you've got 5 asserts to change.
    Well, yes, that's the point.
    (a) Changing the data is changing the design and that's not free so you expect a cost; and (b) the point of the asserts is that you change the code 5 times, not 4!
    Whether you should do that depends on how likely you think the code will be running in 5 years, how likely it'll be you have to change all your asserts, the costs each way, etc.
    Well, yes, but games programming aside, not many of us have the luxury of assuming our code has a short lifespan and won't ever get repurposed, relinked, have its internal APIs abused by direct access, or, worse yet, ctrl-c-ctrl-v'd into something else along with the lovely comment identifying us as the original author but not the idiot who made a patchwork function without knowing the original assumptions.


  • Registered Users Posts: 20,894 ✭✭✭✭Stark


    Sparks wrote: »
    Well, yes, that's the point.
    (a) Changing the data is changing the design and that's not free so you expect a cost; and (b) the point of the asserts is that you change the code 5 times, not 4!

    Why exactly? Good code should have separation of concerns and be easy to change.

    If, for example, the business specifies that the largest payment the payment system should be able to process is 500 euro, to safeguard against an administrator making a mistake and bankrupting the company, then that magic 500 should appear in at most a couple of places in the code. If the business later decides, due to inflation, that the number should be 750, the programmer shouldn't be turning around and saying "sorry, we deliberately made the cost of changing that validation high, so we'll have to touch every part of our codebase and regression-test everything". It shouldn't be your concern if someone down the line takes your code and turns it into a payment system that processes 10,000 euro payments. Your concern is making code that works well for its current purpose and is easy to change if requirements change in the future.
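    A minimal sketch of the payment-limit example above (class and constant names are invented; the thread gives only the 500 euro figure), with the business rule living in exactly one place:

```java
import java.math.BigDecimal;

public class PaymentValidator {
    // The business rule lives in exactly one place; changing 500 -> 750
    // later is a one-line edit, not a codebase-wide hunt.
    static final BigDecimal MAX_PAYMENT_EUR = new BigDecimal("500.00");

    static boolean isPayable(BigDecimal amount) {
        // Positive and within the configured business limit.
        return amount.signum() > 0 && amount.compareTo(MAX_PAYMENT_EUR) <= 0;
    }

    public static void main(String[] args) {
        System.out.println(isPayable(new BigDecimal("499.99"))); // true
        System.out.println(isPayable(new BigDecimal("500.01"))); // false
    }
}
```

    BigDecimal.compareTo is used rather than equals so that "500.0" and "500.00" compare as the same amount, which is usually what you want for money.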


  • Registered Users Posts: 40,055 ✭✭✭✭Sparks


    Stark wrote: »
    Why exactly? Good code should have separation of concerns and be easy to change.
    Yes, it should and that's why we have things like APIs and interfaces and so on; and we're talking about changing those interfaces here, not tweaking a magic number inside a module (which is a whole other can of rotting maggots).


  • Registered Users Posts: 20,894 ✭✭✭✭Stark


    Sparks wrote: »
    with the lovely comment identifying us as the original author but not the idiot who made a patchwork function without knowing the original assumptions.

    Tbh, this is a good argument against having silly comments saying who wrote a piece of code. The original author of a piece of code often isn't the maintainer, and it's wasteful to put audit information in comments when we have SCM systems to tell us who wrote or changed a file.


  • Registered Users Posts: 40,055 ✭✭✭✭Sparks


    Stark wrote: »
    Tbh, this is a good argument against having silly comments to say who wrote a piece of code.
    I don't write such comments, I leave it to version control to do that.
    I'm talking about our V%d[Esc]:bnp'er writing a comment saying "based on X's code here:" or something similarly slimy before jumping ship (yes, it's happened to me).


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    Sparks wrote: »
    Well, yes, that's the point.
    (a) Changing the data is changing the design and that's not free so you expect a cost;

    Absolutely - but what I'm saying is that there's a benefit to minimising the cost of changes, and a cost to anything that increases it.

    In many situations where the codebase is dynamic, certain tests/guards/etc. have a cost outweighing the benefit.

    Other tests/guards/etc are even more valuable when the codebase is dynamic.
    Sparks wrote: »
    and (b) the point of the asserts is that you change the code 5 times, not 4!

    And asserts are often great for just that reason. I'm not saying 'never use asserts'; I'm saying 'think about whether the particular guard is worthwhile'.


    Again, the overarching point I'm trying to make is this:

    Bad: "It's the right/best thing to do to write guards on any public method, so that's why I'm going to write a guard here."

    Good: "OK, I've written this public method. Does it make sense to write a guard in this case? Is it likely to be worth it here, given what I know about how this is likely to be used?"
    Sparks wrote: »
    Well, yes, but games programming aside, not many of us have the luxury of assuming our code has a short lifespan and won't ever get repurposed, relinked, have its internal APIs abused by direct access, or worst yet, ctrl-c-ctrl-v'd into something else along with the lovely comment identifying us as the original author but not the idiot who made a patchwork function without knowing the original assumptions.

    Sure, if you are in a situation where the code is going to be reused a lot, you do things differently.

    Still, though, even in enterprise settings, as a project evolves, it'd be interesting to know what percentage of the code that's written actually survives long-term. When I worked in enterprise settings, a surprising number of projects and code were discarded or deleted within a few months of being written.
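    The bad-vs-good guard decision above can be sketched as follows (a hypothetical example; the method names are invented): guard the trust boundary, and skip re-guarding internal code that only ever sees already-validated data.

```java
import java.util.Objects;

public class GuardChoice {
    // Public entry point: inputs cross a trust boundary here, so a cheap,
    // explicit guard is probably worth its keep.
    public static String greet(String name) {
        Objects.requireNonNull(name, "name must not be null");
        return "hello, " + name;
    }

    // Private helper: only ever called with already-validated data, so
    // repeating the guard here would mostly add noise and change-cost.
    private static String shout(String validatedName) {
        return validatedName.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(greet("ada"));
        System.out.println(shout("ada"));
    }
}
```

    The point isn't that one of these is always right; it's that the guard/no-guard call was made per method, based on who can reach it, rather than applied as a blanket rule.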


  • Registered Users Posts: 40,055 ✭✭✭✭Sparks


    fergalr wrote: »
    When I worked in enterprise settings, a surprising number of projects and code were discarded or deleted within a few months of being written.
    Yeah, but on the other hand I work with two programmers - out of a team of five, and they're not fresh from college - who aren't as old as some of the code they've worked on. The rule for things like this is that the longer something's been around, the further away its end-of-life is.

