
Making the move from academia to the private sector


Comments

  • Registered Users Posts: 13,104 ✭✭✭✭djpbarry


    14ned wrote: »
    I tried to reply to the OP that in my experience such a bias against him might apply here because of his career background.
    Whoops. ;)
    14ned wrote: »
    She mentioned she's on an advisory board for some UK scientific funding agency. Apparently the lack of this stuff causes significant auditing problems when there is a failure to replicate results, because it's not possible to track down who "broke the software". Some want to enforce the use of a centralised, publicly operated source control repo for all publicly funded science where software must be written, and to mandate that software implement regression test suites so it becomes impossible to change the software in a result-invalidating way. Apparently this notion is extremely controversial; academics went up in arms about the idea, and it all got very heated.

    But I'm very much speaking from hearsay and possibly faulty memory now. I have no idea if any of the stuff above is true, but it sounds plausible.
    Well, I work in academia in the UK and I’ve heard absolutely nothing about the above, so I’d be inclined to take it with a pinch of salt. Most academic programmers I know would be only too happy to see more oversight when it comes to developing “academic” software – it’s all too easy to get a paper published, in high-profile journals, with virtually no review or testing of any software that was developed to produce results in said paper (in the life sciences at least).
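    (To make that concrete: the sort of regression test being talked about can be as simple as pinning a published number, so a later "improvement" to the analysis code can't silently change the result. Purely illustrative - analyse() and the expected value below are made up for the example.)

        #include <cassert>
        #include <cmath>

        // Stand-in for the real analysis routine used to produce the published figure.
        double analyse() { return 42.0017; }

        int main() {
            const double published = 42.0017;  // the value reported in the paper
            // Fails the test run if the code no longer reproduces the published result.
            assert(std::fabs(analyse() - published) < 1e-6);
            return 0;
        }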
    14ned wrote: »
    We're getting quite far off the OP's topic now…
    Not at all – it’s an interesting discussion.


  • Registered Users Posts: 13,104 ✭✭✭✭djpbarry


    beauf wrote: »
    Doing a few interviews would be a learning experience that might prompt areas to study.
    Yep, that’s definitely already happened! Had a technical interview that was somewhat soul-destroying, but it was useful in the sense that it highlighted areas I need to work on.


  • Closed Accounts Posts: 22,651 ✭✭✭✭beauf


    It's like free career guidance.


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    14ned wrote: »
    I'd actually disagree :)

    Much if not most of any specialist knowledge field is a sort of bubble created both collectively by the practitioners and individually by the expert. In many ways, the more expert the engineer, the more what they think is true is make-believe with no basis in evidence. Otherwise they wouldn't be an expert pushing the boundaries of what is possible.

    As with any cultural ecosystem, there is a set of widely held biases and prejudices on various topics. For example, most C++ experts dislike Ruby, often quite violently. They might have once been able to say why in detail, but after a few years the detail fades and it just becomes a widely held belief which is uncontroversial in C++ circles. Similarly, most C++ experts think Haskell and Python are great, but again often for quite woolly reasons.

    The same thing applies for hiring decisions. Certain universities, genders, nationalities and career backgrounds get preferred over others. It's rare that these biases are publicly mentioned, even in a spoken aside during a meeting, but everyone knows it happens. I tried to reply to the OP that in my experience such a bias against him might apply here because of his career background.

    I'm not disputing that bias exists, and that hiring committees have biases.

    I'm just saying that it's the job of the hiring committee to be unbiased, and the better they are at that job, the less biased they should be.

    To clarify:
    Each candidate has some quality q
    You can't measure that directly, so you look for signals that reveal information about it.
    (The grade of their schoolwork, their performance in the coding interview etc)
    You try and form some estimate e(q) from the signals you can see.

    The better the hiring committee is at their job, the smaller the gap (the error) between e(q) and q - that's all I'm saying. If the hiring committee is really bad, they think that certain signals carry much more information about q than they actually do, which increases the error.
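    (A toy sketch of that gap in code - the numbers are purely illustrative, nothing more: each candidate has a true quality q, the committee only sees a noisy signal, and any systematic bias it adds to that signal shows up directly as extra error in its estimate.)

        #include <cmath>
        #include <iostream>
        #include <random>

        int main() {
            std::mt19937 rng{42};
            std::normal_distribution<double> quality{0.0, 1.0};  // true candidate quality q
            std::normal_distribution<double> noise{0.0, 0.5};    // measurement noise in the signals
            const double bias = 0.8;  // assumed systematic bias added by a poor committee

            const int n = 100000;
            double sq_err_unbiased = 0, sq_err_biased = 0;
            for (int i = 0; i < n; ++i) {
                double q = quality(rng);
                double e_unbiased = q + noise(rng);         // noisy but unbiased estimate of q
                double e_biased   = q + noise(rng) + bias;  // same noise plus a systematic bias
                sq_err_unbiased += (e_unbiased - q) * (e_unbiased - q);
                sq_err_biased   += (e_biased - q) * (e_biased - q);
            }
            std::cout << "RMS error, unbiased committee: " << std::sqrt(sq_err_unbiased / n) << "\n"
                      << "RMS error, biased committee:   " << std::sqrt(sq_err_biased / n) << "\n";
        }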

    14ned wrote: »
    In many ways, the more expert the engineer, the more what they think is true is make-believe with no basis in evidence.

    Maybe you are not making a statement about how the world should be - maybe you are just saying "I have noticed empirically that, sadly, hiring committees tend to be biased, and that more expert committees are more biased."

    If so, then they aren't likely to be *that* good, in my opinion. They lack good metacognitive skills - poor self-awareness. If these are the people on your hiring committee, someone is doing something wrong.


    But fair enough if you are not advocating or defending this state of affairs, just describing it.
    Maybe I'm too optimistic.

    14ned wrote: »
    Thing is, on average they apparently don't. And that isn't me claiming this - Marian said so, and it's her research field, so she is genuinely an expert on this. She mentioned she's on an advisory board for some UK scientific funding agency. Apparently the lack of this stuff causes significant auditing problems when there is a failure to replicate results, because it's not possible to track down who "broke the software". Some want to enforce the use of a centralised, publicly operated source control repo for all publicly funded science where software must be written, and to mandate that software implement regression test suites so it becomes impossible to change the software in a result-invalidating way. Apparently this notion is extremely controversial; academics went up in arms about the idea, and it all got very heated.

    But I'm very much speaking from hearsay and possibly faulty memory now. I have no idea if any of the stuff above is true, but it sounds plausible.



    We're getting quite far off the OP's topic now, and again I stress I know very little personally about academic programming. But perhaps the above repeated hearsay might clarify how the tradeoffs chosen by academic programmers may benefit them but not their scientific field at large.


    There is not an invalid argument that a large and increasing chunk of the knowledge industry is really some variant of software engineering.

    For example, most of finance nowadays is just moving numbers between storage silos. One could argue trading is just a stylised memcpy() and you wouldn't be entirely wrong.

    That's taking things too far - there is a domain complexity you aren't going to see by thinking about memcpy. That's a bit like saying all programming is just ones and zeros. It might be true, at a very low level, but is not useful - you're not going to build very much with that view.


    14ned wrote: »
    Similar arguments can be made for logistics, a fair chunk of design and art, even music creation nowadays mostly happens inside a computer. Back when I was working on EuroFighter, everybody thought they were building a plane. In fact 70% of the effort went into a software platform. The hardware plane itself became relatively incidental in the big picture of delivering the project.

    Now, that is a perspective I very much agree with!
    I read this article recently:

    Although the JPO has given the new 3i software the green light, test pilots at Edwards Air Force Base, California, are still seeing frequent stability issues on the ground. Officials told Defense News last week that development test pilots at Edwards have trouble booting up their jets about once every three flights.
    http://www.defensenews.com/story/defense/air-space/2016/05/09/f-35-program-office-signs-off-air-force-3i-software/84138390/


    I thought it was funny, the world we are now living in.


    I still disagree about academic programming.
    My disagreement is to do with the separability of scientific inquiries:
    you are clearly thinking about big systems projects - things like fighter plane software, or maybe the LHC - giant academic or industrial software systems projects.

    But a lot of academic work has the opposite character - tons of people can do research PhDs, especially in fields such as computer science or mathematics, where they work very independently and the software they write is very small and self-contained - algorithmic analysis, proofs of concept, data analysis. They use platforms, but are productive working without large teams or software engineering methodology.


    14ned wrote: »
    I'd argue it's not much difference in academia. Researchers might think they are researching some physical phenomenon or other, but in terms of effort invested I'd betcha an increasing share is going into the computer systems underpinning that research. That makes engineering some software unavoidable, so one could argue it might be wise that everyone thinking they are doing task X accepts it's really a variant of programming a computer.

    Computers are becoming a more and more ubiquitous and necessary tool.
    However, software and environments are increasing in power and abstraction all the time.

    As such, I did some research in my PhD working solo (which meant I did not need to use much software engineering), because I could use modern tools; doing the same work 40 years ago in assembly language would have been a large team project.

    The scale of research we want to do is increasing - but the tooling is increasing in sophistication too, enabling individuals to do more independently.

    It's not clear to me that the future is one where every research project is a giant collaboration - individual researchers, and small teams that can be managed informally, are probably now more productive than ever.
    14ned wrote: »
    The theory is that using a library like Hana makes most of the yuck go away :). I'm not personally convinced of that yet, but I do think Eric's Ranges v2 a.k.a. the expected v2 of the C++ STL ought to be eventually fairly yuck free around 2020 or so.

    Regarding being a poor choice of example, sure, I was limited by the cases I knew the facts of and where I was sure the example didn't mind being named. Had I had more free rein I could have done a lot better.

    Niall

    Time will tell - colour me sceptical - I had a pretty bad experience with the Boost Graph Library a few years ago, which made me think that C++, with its heavy use of templating, had gone down a dead end.
    (Having previously been a happy STL user.)

    Python and friends are the near-term future!
    (Medium-term I'm not sure - I'm pretty sure we will be back to static typing - but I don't think it will be C++)


  • Registered Users Posts: 768 ✭✭✭14ned


    fergalr wrote: »
    Maybe you are not making a statement about how the world should be - maybe you are just saying "I have noticed empirically that, sadly, hiring committees tend to be biased, and that more expert committees are more biased."

    I never said more expert committees are more biased. If anything, typically truly expert people only bother turning up at hiring committees in order to get a particular person hired whom they are fairly sure wouldn't be hired otherwise. Hiring committees also vary enormously between orgs; what Microsoft does is utterly different to what Google does, for example. Each has its pros and cons.

    My sweeping, hand-waving generalisation about biases is partly based on personal experience, but much more on conversations with senior Googlers who set up and repeatedly reformed the Google hiring process until they reached a perceived "good enough" process.

    Interestingly, they tried really hard to make Google's process as objective and meritocratic as possible, and the ways in which they failed, and all the unintended consequences they found according to their own internal metrics, would make a fascinating book.

    But it's all uncorroborated hearsay if I speak it. Let's just say they failed to fix the bias in favour of Ivy League universities and the bias against women and some, but not all, minorities. This was despite very strenuous internal efforts, but in the end there is only so far you can push a population of humans, all of whom are behaving mostly subconsciously and quite irrationally (and who get upset and offended if you push too hard on the empirical evidence that they, in aggregate, are not being statistically rational).
    If so, then they aren't likely to be *that* good, in my opinion. They lack good metacognitive skills - poor self-awareness. If these are the people on your hiring committee, someone is doing something wrong.

    Nobody anywhere I've ever worked likes hiring duties. Interviewing people is enormously disruptive to your schedule, and it's not pleasant rejecting often very nice human beings. It's natural to want to avoid those duties if you can, and sufficient seniority usually gives you the clout to make excuses to get out of it.
    Time will tell - colour me sceptical - I had a pretty bad experience with the Boost Graph Library a few years ago, which made me think that C++, with its heavy use of templating, had gone down a dead end.
    (Having previously been a happy STL user.)

    You may be interested to know that recent Boost peer reviews are very hard on any "frivolous" usage of metaprogramming in new libraries. You need to justify with ample rationale why it is absolutely necessary.

    In other words, the pendulum has swung back. Back when it was new and cool, everyone pushed it too far and made anti-social libraries. Recent C++14-only libraries go to great lengths not to expose metaprogramming internals to end users. This will get a lot better again once C++17 Concepts land next year.
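    (A tiny sketch of the idea - the syntax here is illustrative rather than anything final: a constrained template rejects a bad type at the call site with a one-line diagnostic, instead of spewing the library's template internals at the user.)

        #include <concepts>
        #include <string>

        // The constraint documents the requirement and keeps the metaprogramming
        // behind it invisible to the caller.
        template <std::integral T>
        T twice(T x) { return x + x; }

        int main() {
            twice(21);                    // fine
            // twice(std::string{"hi"});  // error: 'std::string' does not satisfy 'std::integral'
        }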
    Python and friends are the near-term future!
    (Medium-term I'm not sure - I'm pretty sure we will be back to static typing - but I don't think it will be C++)

    STL v2 as currently expected is based on coroutines, generators, list comprehensions and algorithmic views of things. Everything can also be lazily executed, so for example you can apply a sorted view onto some partial range of stuff and the sorting only happens incrementally as needed. Coroutines also happily migrate onto kernel threads if asked, so it becomes a cinch to scale out processing over SMP CPU cores.

    It looks a lot like Python and intentionally so, but has the advantage of all being compiled statically so it is rather quick.
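    (For a flavour of that style - an illustrative sketch only, written with the std::ranges/views form this work eventually took rather than the proposed STL v2 itself - a pipeline like this does no work until it is iterated:)

        #include <iostream>
        #include <ranges>
        #include <vector>

        int main() {
            std::vector<int> data{9, 4, 7, 1, 8, 3, 6};

            // A lazy pipeline: nothing is computed until the view is iterated.
            auto evens_squared = data
                | std::views::filter([](int x) { return x % 2 == 0; })
                | std::views::transform([](int x) { return x * x; })
                | std::views::take(2);

            for (int x : evens_squared)    // the work happens here, element by element
                std::cout << x << ' ';     // prints: 16 64
            std::cout << '\n';
        }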

    Niall

