
What is an adequate amount of commentary on code?


Comments

  • Registered Users Posts: 1,922 ✭✭✭fergalr


    Sparks wrote: »
    The rule for things like this is that the longer something's been around, the further away its end-of-life is.

    :)

    Actually, kind of a technical point, but that's not actually incompatible with thinking that most code being written is going to be thrown away soon.
    Might seem paradoxical, but lots of things are - totally logically - like that.

    This is making me curious about whether people have studied these things - the distribution of the lifetime of a line of code, etc.


  • Technology & Internet Moderators Posts: 28,792 Mod ✭✭✭✭oscarBravo


    fergalr wrote: »
    E.g. if the range you want to support changes from [-1 to 1] to [-10 to 10]

    Just zooming in on this: the only reason I'd assert a range of values for a parameter is because the function's logic makes assumptions based on that contract. If the function is written in such a way as not to make any difference - the example function you provided is a case in point - then there's nothing to assert, because it will just work the same way.

    I worked on some code this week that, inter alia, does duration calculations on time values. If I can be confident that the first argument will always be earlier than the second, I can take some shortcuts in how the code is written. But if the arguments are ever the other way around, those shortcuts break the code.

    I'm making the case that in such circumstances, the correct approach is to assert the parameter ordering at the very start of the function. Then, in two years' time, when I decide to call the function with arbitrary times (forgetting the assumptions), instead of getting some really strange results that I have to hunt down, I get an assertion error, which reminds me of the need to either (a) pass the parameters in the correct order, or (b) remove the assertion, and rewrite the function to handle more cases.
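    A minimal sketch of the kind of thing I mean (hypothetical names; the real code is more involved):

        #include <cassert>
        #include <ctime>

        // Relies on the assumption that 'start' is never later than 'end';
        // the shortcut below breaks if the arguments are swapped, so assert
        // the assumption right at the top of the function.
        double durationSeconds(std::time_t start, std::time_t end)
        {
            assert(start <= end && "durationSeconds: start must not be later than end");
            return std::difftime(end, start); // shortcut: no need to handle negative spans
        }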


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    oscarBravo wrote: »
    Just zooming in on this: the only reason I'd assert a range of values for a parameter is because the function's logic makes assumptions based on that contract. If the function is written in such a way as not to make any difference - the example function you provided is a case in point - then there's nothing to assert, because it will just work the same way.

    I worked on some code this week that, inter alia, does duration calculations on time values. If I can be confident that the first argument will always be earlier than the second, I can take some shortcuts in how the code is written. But if the arguments are ever the other way around, those shortcuts break the code.

    I'm making the case that in such circumstances, the correct approach is to assert the parameter ordering at the very start of the function. Then, in two years' time, when I decide to call the function with arbitrary times (forgetting the assumptions), instead of getting some really strange results that I have to hunt down, I get an assertion error, which reminds me of the need to either (a) pass the parameters in the correct order, or (b) remove the assertion, and rewrite the function to handle more cases.

    That all makes total sense.


    What about the case where you take the times from the database, and the SQL query returns them in sorted order?

    Then you pass them through a helper function that increments them all by one.

    Finally, you run the method you mentioned that relies on them being sorted.

    Do you assert that they are sorted in every method? Check that they are still sorted when they come back from every other method?


    Sometimes that would make sense - if defense in depth is important.

    But other times, it's a bad idea - your code just ends up looking like Microsoft in this picture:

    http://usingapple.com/wp-content/uploads/2011/06/Organizational-Chart-for-Apple-Amazon-Facebook-Google-Microsoft-and-Oracle.jpg


  • Registered Users Posts: 8,219 ✭✭✭Calina


    fergalr wrote: »
    Still, though, even in enterprise settings, as a project evolves, it'd be interesting to know what percentage of code that's written actually survives a long time. When I worked in enterprise settings, a surprising amount of projects and code were discarded or deleted within a few months of being written.

    I guess it depends on your enterprise context. I have worked on code that is older than myself on a codebase that I believe even now is relatively stable.

    On a vaguely related note, what might be more interesting is the underlying "why" stuff gets ripped out so quickly. I had a conversation about older code with someone lately - I tend to be of the opinion that older code that's still in place doing its job was well designed and specified at the outset; he made the point that anything which wasn't tended to get ripped out very quickly, so by definition, if it survived that long...

    Maybe it's seen as normal, but it strikes me as a waste of resources if quite a bit of work is getting discarded or deleted within a few months. I'd consider that efficiency gains might be possible there. That's why I'd be more interested in the underlying reasons than in the quantity.


  • Technology & Internet Moderators Posts: 28,792 Mod ✭✭✭✭oscarBravo


    fergalr wrote: »
    That all makes total sense.


    What about the case where you take the times from the database, and the SQL query returns them in sorted order?

    Then you pass them through a helper function that increments them all by one.

    Finally, you run the method you mentioned that relies on them being sorted.

    Do you assert that they are sorted in every method? Check that they are still sorted when they come back from every other method?


    Sometimes that would make sense - if defense in depth is important.

    I guess I'd just summarise by saying that if a function makes assumptions about its parameters that it relies on in order to reliably produce a result, the function should assert those assumptions. If it's not asserting the assumptions, it shouldn't rely on them.

    fergalr wrote: »
    But other times, it's a bad idea - your code just ends up looking like Microsoft in this picture:

    http://usingapple.com/wp-content/uploads/2011/06/Organizational-Chart-for-Apple-Amazon-Facebook-Google-Microsoft-and-Oracle.jpg

    Heh, I like that one :)


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    Calina wrote: »
    I guess it depends on your enterprise context. I have worked on code that is older than myself on a codebase that I believe even now is relatively stable.

    On a vaguely related note, what might be more interesting is the underlying "why" stuff gets ripped out so quickly. I had a conversation about older code with someone lately - I tend to be of the opinion that older code that's still in place doing its job was well designed and specified at the outset; he made the point that anything which wasn't tended to get ripped out very quickly, so by definition, if it survived that long...

    Maybe it's seen as normal, but it strikes me as a waste of resources if quite a bit of work is getting discarded or deleted within a few months. I'd consider that efficiency gains might be possible there. That's why I'd be more interested in the underlying reasons than in the quantity.

    I strongly disagree with the mindset behind that post. (Note - nothing personal here, and I don't mean to be antagonistic - I just disagree with the way of thinking.)


    You are basically arguing that it'd be more efficient if everything was written in a more bulletproof way at the start.

    Like, as a developer, I have sympathy for that worldview. But I think it's wrong.

    Code in practice doesn't mostly get thrown away because it's not of high enough quality. Instead it gets thrown away because it's solving the wrong problem; because it doesn't meet the actual business needs; because it gets outdated when things change; because it takes too long to ship and the project gets cancelled.


    People write a lot of crap about 'agile'. But the key idea is that if you do things with a bit less process, and get earlier versions out a bit quicker, you'll spend less time writing stuff that's wrong.

    And that's a really good idea.

    Sacrificing code 'quality' in return for speed of development (even short term speed of development) makes a lot of sense. Because you get to test the risky assumptions quicker.



    This is even true in a purely coding sense, even without shifting business requirements: you often don't know what the right way to build something is when you start, so it makes a lot of sense to 'hack' a prototype; you'll probably do several such prototypes, and throw away and refactor a lot of parts of a hard project, before you really figure out how to build it. Filling your code with tests and guards while you go through that process slows you down, and makes things less efficient overall.
    Calina wrote: »
    On a vaguely related note, what might be more interesting is the underlying "why" stuff gets ripped out so quickly. I had a conversation about older code with someone lately - I tend to be of the opinion that older code that's still in place doing its job was well designed and specified at the outset; he made the point that anything which wasn't tended to get ripped out very quickly, so by definition, if it survived that long...

    Again, I think the causality usually goes the other way around. Usually, if something has survived a long time, and is still being used, it's because it's doing something important enough that it makes sense to maintain it and make it robust.

    It's not because it's particularly easy to maintain up front or robustly designed. That is a factor too, but it's rarer. At least that's my understanding.


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    oscarBravo wrote: »
    I guess I'd just summarise by saying that if a function makes assumptions about its parameters that it relies on in order to reliably produce a result, the function should assert those assumptions. If it's not asserting the assumptions, it shouldn't rely on them.

    In general, that's a rule most people would agree with.

    But you didn't answer the more specific scenario I gave.
    If you take that general rule and apply it to a system where, e.g., user input gets passed down 10 levels of abstraction (i.e. 10 method calls) into the system, and if you call the helper functions to canonicalise/escape/security-check/check-non-null/guard that user input at every one of the 10 steps, rather than just at some external system boundary, then you are probably doing things wrong.

    It's a sensible general rule, but if you apply it everywhere without regard for cost, your code will be suboptimal.
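    To make that concrete, a toy sketch (hypothetical names, and two levels instead of ten for brevity):

        #include <stdexcept>
        #include <string>

        // System boundary: validate/sanitise the user input exactly once, on the way in.
        std::string sanitise(const std::string& raw)
        {
            if (raw.empty())
                throw std::invalid_argument("input must not be empty");
            return raw; // imagine canonicalising/escaping here as well
        }

        // Inner layers trust the boundary and stay free of duplicate guards.
        std::string innerTransform(const std::string& clean)
        {
            return clean + "!";
        }

        std::string process(const std::string& raw)
        {
            return innerTransform(sanitise(raw));
        }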


  • Registered Users Posts: 240 ✭✭mrkite77


    c_man wrote: »
    I'm of the minimalist school of thought when it comes to commenting

    Same. I tend to comment more on what's *not* in the code. Explaining that I, too, thought of that super awesome optimization for this part of code, but it won't work because of xxxx.

    It's especially useful for Future Me, because otherwise I'll look at that old code and have the exact same idea for improving it, and not remember that I had already tried that.

    There are some things I'll always comment though.

    I'll comment any complex regexes, especially since things like negative lookbehind assertions tend to be confusing and require people to stop and logic through them in order to understand what is going on.
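    For instance, something like this (a made-up pattern, just to show the commenting style):

        #include <regex>

        // Matches a version string like "1.12.3":
        //   (\d+)  - major version
        //   \.     - literal dot
        //   (\d+)  - minor version
        //   \.     - literal dot
        //   (\d+)  - patch version
        static const std::regex versionPattern(R"((\d+)\.(\d+)\.(\d+))");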

    I'll also comment any magic numbers used. I never use a constant for a magic number that's only used once. That just obfuscates things and spreads your code out.

    Here's an example:
    	//locate end of central directory record
    	qint64 ziplen=f.size();
    	qint64 maxECDlen=0xffff + 2 + 20; //max comment + comment len + ECD
    	if (maxECDlen>ziplen) //zip is shorter?
    		maxECDlen=ziplen;
    
    	f.seek(ziplen-maxECDlen); //ECD must be after this
    
    	QByteArray data=f.read(maxECDlen);
    	const quint8 *p=(const quint8 *)data.constData();
    
    	bool found=false;
    	//now scan this data for the ECD signature
    	for (qint64 i=0;i<maxECDlen-20;i++,p++)
    	{
    		if (p[0]==0x50 && p[1]==0x4b && p[2]==0x05 && p[3]==0x06)
    		{
    			found=true;
    			break;
    		}
    	}
    	if (!found) //no ecd found, probably not a zip
    	{
    		f.close();
    		return false;
    	}
    
    	//awesome, now to find the central directory
    ....
    

    This is part of code that reads a .zip file. I comment roughly what I'm doing, but I don't go into too much detail because if you're going to modify this code, you damn well better have the .zip specification open in front of you.


  • Technology & Internet Moderators Posts: 28,792 Mod ✭✭✭✭oscarBravo


    fergalr wrote: »
    If you take that general rule and apply it to a system where, e.g., user input gets passed down 10 levels of abstraction (i.e. 10 method calls) into the system, and if you call the helper functions to canonicalise/escape/security-check/check-non-null/guard that user input at every one of the 10 steps, rather than just at some external system boundary, then you are probably doing things wrong.
    That's the sort of situation where, rather than assuming and asserting, I'd stop assuming and start checking instead.


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    oscarBravo wrote: »
    That's the sort of situation where, rather than assuming and asserting, I'd stop assuming and start checking instead.

    That doesn't change the core issue.

    Either
    1) you do your checking in every one of your functions, which is consistent with your stated policy of each function guarding its input; but which means every function becomes full of duplicate error checking/assertions/[your-guard-method-of-choice]

    or

    2) you only check at some other sort of boundary - e.g. system boundaries, trust boundaries - in which case you admit your earlier principle is not applied absolutely in practice.


  • Advertisement
  • Technology & Internet Moderators Posts: 28,792 Mod ✭✭✭✭oscarBravo


    fergalr wrote: »
    That doesn't change the core issue.

    Either
    1) you do your checking in every one of your functions, which is consistent with your stated policy of each function guarding its input; but which means every function becomes full of duplicate error checking/assertions/[your-guard-method-of-choice]

    or

    2) you only check at some other sort of boundary - e.g. system boundaries, trust boundaries - in which case you admit your earlier principle is not applied absolutely in practice.

    Well, no. The point I made is that if a function makes an assumption about its parameters, it should assert the assumption. If it's not going to assert the assumption, it shouldn't make it.

    You gave the example of the function that makes the assumptions about its parameters being ten calls deep from the initial point at which the values were derived, and you seem to be suggesting that every function has to assert the order of the parameters on behalf of the one that makes the assumption. I don't know where you're getting that: I'm talking about one single function that relies on the assumption of its parameter ordering, and that's the function that needs to assert that assumption. The other nine function calls in the stack above it don't make any assumptions, because what they're doing to the parameters doesn't rely on any such assumption. Or, if they do, then they should assert it too.

    Back to my original example: there is one function that assumes the two time values it is passed are in a particular order. That one function relies on that assumption in order to take some shortcuts in how it does its calculations, and should assert that assumption.

    The function that calls that one makes no such assumption. It doesn't care about the parameter order, and will work in precisely the same way whether they are in ascending or descending order. Why should it assert the sequence? That doesn't make any sense.

    I never suggested that every function should guard its inputs; I said that if the function relies for correct operation on an assumption about its inputs, then it's a good idea for that function to assert the assumption. That's all.


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    oscarBravo wrote: »
    Well, no. The point I made is that if a function makes an assumption about its parameters, it should assert the assumption. If it's not going to assert the assumption, it shouldn't make it.

    You gave the example of the function that makes the assumptions about its parameters being ten calls deep from the initial point at which the values were derived, and you seem to be suggesting that every function has to assert the order of the parameters on behalf of the one that makes the assumption. I don't know where you're getting that: I'm talking about one single function that relies on the assumption of its parameter ordering, and that's the function that needs to assert that assumption. The other nine function calls in the stack above it don't make any assumptions, because what they're doing to the parameters doesn't rely on any such assumption. Or, if they do, then they should assert it too.

    Back to my original example: there is one function that assumes the two time values it is passed are in a particular order. That one function relies on that assumption in order to take some shortcuts in how it does its calculations, and should assert that assumption.

    The function that calls that one makes no such assumption. It doesn't care about the parameter order, and will work in precisely the same way whether they are in ascending or descending order. Why should it assert the sequence? That doesn't make any sense.

    I never suggested that every function should guard its inputs; I said that if the function relies for correct operation on an assumption about its inputs, then it's a good idea for that function to assert the assumption. That's all.


    Let's say you are building a system that works with data that typically has a certain constraint.


    The constraint could be 'the data points are in increasing order', like you mention.
    Equally, it could be 'there are no cycles in this graph data', which is a little more computationally costly to check.

    Let's say you have something you want to do with this data: a complex transform that relies on many algorithms, each expressed as a separate function, each of which relies on that constraint and preserves it, and which you want to run in sequence on this data.


    Sometimes, it makes more sense to just check the constraint once, at the system boundary, and then run all the functions, each blindly assuming the constraint is true.
    The more complex or otherwise expensive the constraint is to check/assert, or the more checks I have to do, the more likely it is that this makes sense.


    Even if I was building a system where the constraint was just that the data points are in sorted order, where I had many functions that were going to rely on this constraint, I could definitely see myself checking the constraint once at the system boundary, and then just relying on it thereafter, without asserting/checking this at every function boundary. Especially if it was something a bit more costly to check.



    If someone agrees that, yes, this often makes sense to do, for reasons of computational efficiency, code clarity, system design, etc., then we are in agreement.


    If they dogmatically say that each of the functions that rely on this constraint should check it, pretty much regardless, then we are not in agreement.


    In extremis, I'll be able to build systems that sometimes complete the task in the required computational time, while they won't.


    In more prosaic cases, I'll be able to write systems which are coded in a much clearer way inside the system boundary.



    If I take this too far, and screw up, I'll not do error checking when I should, and I'll have a function that is making an assumption that is being violated.


    But in many cases that risk makes sense to take.
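    As a rough sketch of the shape I have in mind (hypothetical names; a sorted series standing in for the general constraint, and assumed non-empty once it's past the boundary):

        #include <algorithm>
        #include <stdexcept>
        #include <vector>

        // System boundary: check the constraint exactly once.
        std::vector<double> checkedSeries(std::vector<double> data)
        {
            if (data.empty() || !std::is_sorted(data.begin(), data.end()))
                throw std::invalid_argument("series must be non-empty and sorted");
            return data;
        }

        // Inner functions rely blindly on the constraint - no re-checking.
        double smallest(const std::vector<double>& sorted) { return sorted.front(); }
        double largest(const std::vector<double>& sorted)  { return sorted.back(); }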


  • Technology & Internet Moderators Posts: 28,792 Mod ✭✭✭✭oscarBravo


    If the assumptions about the parameters form an intrinsic part of the system's design, then yes: it's probably fair to check them only once, because if the assumptions are that intrinsic, then there isn't going to be a reason to suddenly change that assumption mid-system.

    So yes, for the edge case you've managed to contrive, it would make sense not to bother asserting your assumptions.

    Ironically, this whole conversation started from a situation where you described a function as always having values between -1 and 1, then provided code that didn't care about those constraints, and finally gave sample argument values that weren't even in that range! So I guess it's doubly true that you shouldn't assert your assumptions if those assumptions are invalid to start with.


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    oscarBravo wrote: »
    If the assumptions about the parameters form an intrinsic part of the system's design, then yes: it's probably fair to check them only once, because if the assumptions are that intrinsic, then there isn't going to be a reason to suddenly change that assumption mid-system.

    So yes, for the edge case you've managed to contrive, it would make sense not to bother asserting your assumptions.

    I don't think it's a contrived edge case. I think it's really common to check or sanitise things at boundaries, and then write code inside the boundary that relies on those assumptions; I think that often is the best way to write things.

    oscarBravo wrote: »
    Ironically, this whole conversation started from a situation where you described a function as always having values between -1 and 1, then provided code that didn't care about those constraints, and finally gave sample argument values that weren't even in that range! So I guess it's doubly true that you shouldn't assert your assumptions if those assumptions are invalid to start with.

    I constructed the function where you know its input is between -1 and 1 as a hypothetical example, to make the point clearly that you shouldn't always check things.

    This might occur in practice because your parameters have been mapped into that range elsewhere in the system.
    You seem to have agreed that in that case, you don't need to check/assert/guard them again, so no further disagreement there.


    I agree that there's benefit to having your 'inner' code be robust, and fail predictably, if the assumptions are violated.

    But there's a cost to writing the extra code to check those assumptions, to maintaining it, and to looking at it in your codebase.

    On the other hand, if you don't write it, then you've got to remember, if you change the outer stuff, to make sure to check whether the inner stuff needs changing too. There are costs both ways, and tradeoffs to make.


    When I write code, I try not to follow rules too rigidly. I think about whether it makes sense to do these things as I do them. It'll depend on what the code is for, how fluid it is, how tricky it is, whether this method is part of the system I think will last a long time, whether I'm planning on re-writing it, how likely the system is to be canned, the impact of errors on the system, the cost of errors, etc. It's a quick and heuristic process. I'm sure I make mistakes, but I'm pretty convinced it's the right way to go about these things.


  • Registered Users Posts: 40,038 ✭✭✭✭Sparks


    fergalr wrote: »
    I don't think it's a contrived edge case. I think it's really common to check or sanitise things at boundaries, and then write code inside the boundary that relies on those assumptions; I think that often is the best way to write things.
    With the possible exception of writing code that nobody else will ever work on (and I'm including Future-You-Who-Hasn't-Seen-This-Codebase-In-Over-Six-Months in the "other people" bracket), I think that's not a very good way to write code.

    That's not saying that nobody does it, but people who write this way seem to create a lot of work six months down the line for everyone else on the team, or worse, people not on the team who are using their libraries/products.

    The paranoid people who write asserts (or whatever equivalent they have) in even their smallest functions will at least trigger panics or traps or coredumps or whatever mechanism is in place for error handling in their system and that will at least let people know there's an error; those who don't may teach everyone yet another valuable painful lesson about what the words "undefined behaviour" in the language standard really mean.

    Whether or not they write test suites to exercise those asserts, that's an arguable point (no, that one really is), but what you're describing is similar in concept to open loop control (which is usually introduced with the example of driving down the M50 in a car while the windows are painted black and you have a map, a stopwatch and a speedometer to navigate by - everything's fine until the moment something unexpected happens or you make even the smallest of mistakes).


  • Technology & Internet Moderators Posts: 28,792 Mod ✭✭✭✭oscarBravo


    One other thought occurred to me: if the parameters that are being passed ten calls deep into the system while potentially being altered at every step must retain various constraints - such as forming a DAG - then I'd almost certainly either find or write a class that encapsulates that behaviour and enforces it itself, and pass an instance of that class as the parameter.
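    Roughly along these lines (a sketch, with a simple sortedness invariant standing in for the DAG case):

        #include <algorithm>
        #include <stdexcept>
        #include <utility>
        #include <vector>

        // Enforces the "always sorted" invariant at construction, so any
        // function taking a SortedSeries can rely on it without re-checking.
        class SortedSeries
        {
        public:
            explicit SortedSeries(std::vector<double> values)
                : values_(std::move(values))
            {
                if (!std::is_sorted(values_.begin(), values_.end()))
                    throw std::invalid_argument("SortedSeries: values must be sorted");
            }

            const std::vector<double>& values() const { return values_; }

        private:
            std::vector<double> values_;
        };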


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    oscarBravo wrote: »
    One other thought occurred to me: if the parameters that are being passed ten calls deep into the system while potentially being altered at every step must retain various constraints - such as forming a DAG - then I'd almost certainly either find or write a class that encapsulates that behaviour and enforces it itself, and pass an instance of that class as the parameter.

    That's often a good idea - leverage the type system etc.

    Anyway, I've nothing more to add - I'll be resting my case, so to speak - interesting to discuss though, thanks.


  • Registered Users Posts: 232 ✭✭lemon_remon


    I didn't read the full discussion (it was long, sorry!) but it sounds to me like you guys are arguing "Look Before You Leap" vs "It's Easier to Ask Forgiveness Than Permission". I thought this was a good blog post on the matter: http://oranlooney.com/lbyl-vs-eafp/
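    For anyone skimming, the distinction in miniature (a sketch, using std::map purely for illustration):

        #include <map>
        #include <stdexcept>
        #include <string>

        std::map<std::string, int> ages{{"alice", 30}};

        // LBYL: check before acting.
        int lookBeforeYouLeap(const std::string& name)
        {
            auto it = ages.find(name);
            return it != ages.end() ? it->second : -1;
        }

        // EAFP: act, and handle the failure if it happens.
        int easierToAskForgiveness(const std::string& name)
        {
            try { return ages.at(name); }                     // throws if absent
            catch (const std::out_of_range&) { return -1; }
        }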


  • Registered Users Posts: 27,095 ✭✭✭✭GreeBo


    Sparks wrote: »
    ....everything's fine until the moment something unexpected happens or you make even the smallest of mistakes).

    In this case, someone introduces a new entry point OR your internals get exposed as a package/jar/library/whatever and get used in all manner of other ways by new and interesting people.


  • Registered Users Posts: 20,932 ✭✭✭✭Stark


    And your code achieves sentience and becomes Skynet or <insert other unlikely event>. Is the codebase really the best place to have the fight if people in your organisation are actively looking to hack and sabotage your code? Or if it's simply a case of incompetence and too many cooks, consider a move to a services architecture, i.e. instead of giving someone a jar, give them a REST service or whatever it is they need to use, thus isolating the entry points in your code. If you're constantly in firefighting mode, frantically putting in defence after defence while the project crumbles, then it's probably time to step back and re-evaluate the architecture/team structures from a different perspective.




  • Registered Users Posts: 27,095 ✭✭✭✭GreeBo


    Stark wrote: »
    And your code achieves sentience and becomes Skynet... Is the codebase really the best place to have the fight if people in your organisation are actively looking to hack and sabotage your code?

    Who said anything about a fight?
    Code gets re-used all the time - sometimes in situ via a new entry point (that may have completely different rules about the range of params), and sometimes it's picked up and deployed somewhere else.

    If I look at a method called calculatesSomething(int a, int b) and that something is a thing that I want to calculate, then I will call that method rather than create my own.

    If I see assertions guarding the params, then I know how/when to use it.

    If, on the other hand, something funky happens for some values, I'll probably discover that when I get a phone call at stupid o'clock some morning about production problems.


  • Registered Users Posts: 20,932 ✭✭✭✭Stark


    If someone is poking around in your code, does a copy and paste and then breaks something through using it incompetently, why are you the one getting the blame? Just saying it sounds like there are organisational/cultural issues at play that may need to be addressed. I think as a development team you need to set clear directives on what code is intended to be used as a service/reusable components with all the validations you expect from such code and what code is meant for private consumption. Otherwise things become like the funny Microsoft drawing that fergalr posted.




  • Registered Users Posts: 27,095 ✭✭✭✭GreeBo


    Stark wrote: »
    If someone is poking around in your code, does a copy and paste and then breaks something through using it incompetently, why are you the one getting the blame? Just saying it sounds like there are organisational/cultural issues at play that may need to be addressed. I think as a development team you need to set clear directives on what code is intended to be used as a service/reusable components with all the validations you expect from such code and what code is meant for private consumption. Otherwise things become like the funny Microsoft drawing that fergalr posted.

    "Copy and Paste"?

    That's not re-use, that's barely recycling!
    I'm talking about someone calling *your* code. Once your code is in the JVM and public, it can be called from anywhere. I would argue that it's your responsibility to make sure that it behaves as expected OR, if not, that it throws a defined wobbler when it's not used as expected/defined.


  • Registered Users Posts: 40,038 ✭✭✭✭Sparks


    Someone accessing your library internals in an unplanned way? Why, that would never happen! Sorry, Google, what did you say about undocumented DirectX calls being used all the time in games over the last decade or so? And applications doing the same thing with the Windows API, to the point where large amounts of code were being written with every release by Microsoft to allow compatibility with those undocumented calls?


  • Closed Accounts Posts: 4,436 ✭✭✭c_man


    Am on a code review at the mo. Noticed that there's a rather large block of comments already existing in one of the files, outlining that the next block should be removed ("implemented elsewhere correctly, TODO remove"). The code is still being hit... I asked the reviewee about it; he didn't know. Took a look with SVN blame, but damn you, repo move a year ago! So the comment will stay, probably forever.


  • Registered Users Posts: 2,019 ✭✭✭Colonel Panic


    I'm doing some silly things with C++ lambdas and C-style callbacks to SQLite and thought I should probably comment what I'm doing. So, I lay my commentary here for judgement. (Sorry Sparks, this is the kind of C++ perversion you'll HATE!)
    bool Database::Execute(std::string const& query, RowHandler handler)
    {
      // Create a closure that matches the callback signature sqlite3_exec wants.
      // Handler param passed as user data, so cast that to the correct type and call it.
      // Explicitly setting return type as int isn't 100% necessary as it can be deduced 
      // from return type of RowHandler.
      auto wrapper = [](void* userData, int columnCount, char **columnValues, char **columnNames) -> int 
      {
        auto handler = reinterpret_cast<RowHandler*>(userData);
        return (*handler)(columnCount, columnValues, columnNames);
      };
    
      // Passing &handler is safe because sqlite3_exec runs the callback
      // synchronously, so the pointer outlives the call.
      // sqlite3_exec returns SQLITE_OK (0) on success, so compare explicitly
      // rather than converting the result code straight to bool.
      int rc = sqlite3_exec(db, query.c_str(), wrapper, &handler, /* error handling can wait */ nullptr);
      return rc == SQLITE_OK;
    }
    

    Pastebin link -> http://pastebin.com/e76ikkcT
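    A rough call-site sketch, assuming RowHandler is something like std::function<int(int, char**, char**)> and the Database instance already holds an open connection:

        #include <cstdio>

        // Hypothetical usage - prints each column of each row.
        database.Execute("SELECT name, email FROM users;",
            [](int columnCount, char** columnValues, char** columnNames) -> int
            {
                for (int i = 0; i < columnCount; ++i)
                    std::printf("%s = %s\n", columnNames[i], columnValues[i]);
                return 0; // returning non-zero would make sqlite3_exec abort
            });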

