
Web development and hosting: common catastrophes and fail-safes

  • 01-03-2015 9:20pm
    #1
    Registered Users Posts: 1,206 ✭✭✭ zig


    I'm looking for advice from people with many years of web development experience: common practices, catastrophes they've witnessed, and ways of preventing them.

    I'm now in a position where I'm ultimately responsible for this stuff, and I'm concerned that there are things I'm either unaware of or don't have a good policy in place for to ensure nothing disastrous happens.

    I'd also love to hear if you've anything to add to my own list.

    So far I've come across the following things in workplaces, each of which had a huge impact on the client.

    1. A robots.txt that disallowed crawling was accidentally copied onto the production server, destroying years of SEO work for a client whose entire business relied on search rankings. I'm unsure what they did afterwards, but I do know it's possible to rebuild your rankings fairly quickly with a lot of effort.

    Possible Solutions:

    - ensure it's ignored in git
    - have a script run every morning that checks every site we've worked on or are responsible for hosting, and alerts us if its robots.txt contains "Disallow" or something to that effect (see the sketch below)
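    A minimal sketch of what that morning check might look like, assuming the monitored sites live in a hard-coded list (SITES here is a placeholder) and that "alerting" is just printed output to be wired up to email or similar:

    ```python
    #!/usr/bin/env python3
    """Daily check: warn if any production site's robots.txt blocks crawling."""
    import urllib.request

    # Placeholder list; in practice this would come from a config file or DB.
    SITES = [
        "https://example-client-one.com",
        "https://example-client-two.com",
    ]

    def robots_blocks_crawling(base_url):
        """Return True if the site's robots.txt contains a blanket 'Disallow: /'."""
        url = base_url.rstrip("/") + "/robots.txt"
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        # A bare "Disallow: /" blocks everything; tighten or loosen this test
        # if clients legitimately disallow specific paths.
        return any(line.strip().lower() == "disallow: /" for line in body.splitlines())

    if __name__ == "__main__":
        for site in SITES:
            try:
                if robots_blocks_crawling(site):
                    print("ALERT: %s robots.txt disallows all crawling" % site)
            except Exception as exc:  # an unreachable robots.txt is also worth a look
                print("WARN: could not fetch robots.txt for %s: %s" % (site, exc))
    ```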

    2. (WordPress only) The option to discourage search engines from indexing the site being left on, resulting in the same effect as number 1.

    Possible Solution:

    - write a plugin that checks this setting and sends an alert, and install it on every site (seems messy, though)
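    That said, the setting can also be detected from outside, from the same daily cron as the robots.txt check, because WordPress emits a robots meta tag containing "noindex" on rendered pages when the option is ticked. The exact markup varies by version, so treat this as a heuristic sketch:

    ```python
    import urllib.request

    def wp_discourages_indexing(url):
        """Heuristic: WordPress adds a robots meta tag containing 'noindex'
        to rendered pages when 'Discourage search engines' is enabled."""
        req = urllib.request.Request(url, headers={"User-Agent": "site-monitor/1.0"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace").lower()
        # WordPress has used both single and double quotes in this tag over the years.
        return ("name='robots'" in html or 'name="robots"' in html) and "noindex" in html
    ```

    Running this against every WordPress site avoids installing and maintaining a plugin on each one.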

    3. Domain hosting issue: something strange happened recently where a domain we're responsible for (it was transferred to us) stopped working. I can't really explain what happened, only that the DNS records stopped resolving. The site and email went down, resulting in panic calls from the client. They're a very established shop with e-commerce, so this was pretty bad.

    In fairness, they had it working again within the hour.

    The provider didn't really give an explanation for this, but did admit that it was nothing we did or could have avoided.

    Possible solution:

    - learn about DNS hosting and take responsibility for it ourselves instead of outsourcing; at the very least, monitor resolution so we hear about failures before the client does (see the sketch below).
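    Even if DNS stays outsourced, resolution can be monitored. A minimal sketch using only the standard library; the domain-to-IP map is a made-up example and would need to track the real records:

    ```python
    import socket

    # Placeholder map of domains to the A records we expect (203.0.113.x is a
    # documentation-only range, not a real address).
    EXPECTED = {
        "example-client-shop.com": {"203.0.113.10"},
    }

    for domain, expected_ips in EXPECTED.items():
        try:
            # getaddrinfo returns every IPv4 address the resolver currently sees
            answers = {info[4][0] for info in socket.getaddrinfo(domain, 80, socket.AF_INET)}
        except socket.gaierror as exc:
            print("ALERT: %s is not resolving at all: %s" % (domain, exc))
            continue
        if not answers & expected_ips:
            print("ALERT: %s resolves to %s, expected %s" % (domain, answers, expected_ips))
    ```

    Checking MX records too would catch the email side, but that needs a resolver library such as dnspython rather than the standard library.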

    4. In my old job, one site got hacked, but because the company hosted many sites on the one server, many clients were affected.

    Solution we've already implemented:
    - all sites have their own VPS, each with its own security setup.

    If you've anything to add, I'd be grateful!

    Thank you


Comments

  • Registered Users Posts: 7,521 ✭✭✭ jmcc


    There's one important point missing: Backups.

    Regards...jmcc


  • Registered Users Posts: 586 ✭✭✭ Aswerty


    jmcc wrote: »
    There's one important point missing: Backups.

    Regards...jmcc

    And just to elaborate on that: backups are part of a backup strategy, which is part of an overall business continuity strategy. Every business needs safeguards in place so that it can always get back to a stable state within a reasonable time frame after **** hits the fan. If this isn't pushed from on high, then you know your management isn't up to scratch.

    One thing for the OP: make sure that if you get hit by a bus this evening (or, less drastically, win the lotto), someone can easily slot into your role tomorrow. This means ensuring colleagues know what you do, and documenting the specifics that it wouldn't be reasonable to expect anyone else to know to the degree you do.

    One thing I found extremely useful in a previous role, where I built and managed a .NET web application, was an error-handling facility called ELMAH. It let me set up automatic notifications whenever errors in the web application were not handled gracefully; I found it an invaluable tool. I used it in conjunction with Pingdom, and with both of these tools I had real-time notification of everything that went wrong at the application and hosting levels. At the infrastructure level we used a VPS-per-application approach similar to yours.


  • Technology & Internet Moderator Posts: 28,830 ✭✭✭✭ oscarBravo


    Aswerty wrote: »
    And just to elaborate on that: backups are part of a backup strategy, which is part of an overall business continuity strategy.

    It's amazing how many backup strategies don't include a restore strategy.
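    One cheap way to keep a restore strategy honest is to exercise it automatically. A minimal sketch, assuming backups land as gzipped tarballs in one directory; the paths and the "must contain" file are placeholders:

    ```python
    import tarfile
    import tempfile
    import time
    from pathlib import Path

    BACKUP_DIR = Path("/var/backups/sites")       # placeholder location
    MAX_AGE_HOURS = 26                            # daily backups plus some slack
    MUST_CONTAIN = "public_html/index.php"        # a file every restore should yield

    def verify_latest_backup():
        backups = sorted(BACKUP_DIR.glob("*.tar.gz"), key=lambda p: p.stat().st_mtime)
        if not backups:
            raise RuntimeError("no backups found at all")
        latest = backups[-1]
        age_hours = (time.time() - latest.stat().st_mtime) / 3600
        if age_hours > MAX_AGE_HOURS:
            raise RuntimeError("latest backup %s is %.0fh old" % (latest.name, age_hours))
        # Don't just check the archive exists: unpack it, the way a real restore would.
        with tempfile.TemporaryDirectory() as scratch:
            with tarfile.open(latest) as tar:
                tar.extractall(scratch)
            if not (Path(scratch) / MUST_CONTAIN).exists():
                raise RuntimeError("%s restored but %s is missing" % (latest.name, MUST_CONTAIN))
        print("OK: %s restores cleanly" % latest.name)

    if __name__ == "__main__":
        verify_latest_backup()
    ```

    Restoring database dumps into a scratch database on the same schedule would cover the other half of a typical site.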


  • Registered Users Posts: 2,781 ✭✭✭ amen


    Hi OP,

    I noticed your post yesterday and I've been thinking about what you wrote.

    While your scripts/jobs are all well and good, they are reactive rather than proactive/preventative.

    From reading your issues, it appears that you/your company has:
    1: No proper automated build/release procedure
    2: No test environment where a build is tested before release
    3: No configuration management. This should be tied to your build/release, which would allow you to have different configurations for Test and Live (see the sketch below)
    4: No change management log identifying who changed what on Live, and when/why. There should be no changes on Live except via a release that has gone through the above steps
    5: No backup/restore strategy
    6: No redundancy
    7: Not sure if you are using a source code repository (Git/TFS/Subversion etc.)

    I may sound harsh, but unless you have a solid, stable test/release mechanism you are always going to have the problems you described.
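    To make points 1 and 3 concrete: if the release script, rather than a manual copy, decides what lands on production, an accident like the OP's robots.txt incident becomes much harder. A rough sketch, assuming rsync-over-SSH deploys; the hosts and paths are invented for illustration:

    ```python
    import subprocess
    import sys

    # Hypothetical targets; real values would live in versioned configuration.
    TARGETS = {
        "staging":    {"host": "staging.example.com",    "robots": "config/robots.staging.txt"},
        "production": {"host": "production.example.com", "robots": "config/robots.production.txt"},
    }

    def deploy(env):
        target = TARGETS[env]
        # Never ship whatever robots.txt happens to be in the working copy...
        subprocess.run(
            ["rsync", "-az", "--delete", "--exclude", "robots.txt",
             "site/", "deploy@%s:/var/www/site/" % target["host"]],
            check=True,
        )
        # ...push the per-environment version explicitly instead.
        subprocess.run(
            ["scp", target["robots"], "deploy@%s:/var/www/site/robots.txt" % target["host"]],
            check=True,
        )

    if __name__ == "__main__":
        deploy(sys.argv[1])  # e.g. python deploy.py staging
    ```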


  • Registered Users Posts: 1,206 ✭✭✭ zig


    Hi Amen,

    Not harsh at all; I asked so I could hear this stuff. That said, the issues above arose in previous workplaces, so I've been trying my best to implement better processes so that what went on before can't happen again.

    It's a decent list you have there, some of which I've implemented and some of which I haven't, so may I pick your brain a bit later for advice on a few things?

    Cheers


  • Registered Users Posts: 2,781 ✭✭✭ amen


    Feel free to post here. I'm sure you will get loads of responses.

    Not everyone will agree, but lots of viewpoints are good.


  • Registered Users Posts: 249 ✭✭ gargargar


    Whitelists for remote server access, using SSH keys. Some people think it's enough to have a valid key. Also, private keys should be password-protected.
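    The whitelist itself lives in sshd_config (directives like AllowUsers, plus turning off password logins), and it's easy to audit alongside the other daily checks. A small sketch; the required values here are one possible policy, not the only one:

    ```python
    from pathlib import Path

    SSHD_CONFIG = Path("/etc/ssh/sshd_config")

    # Values we expect on a locked-down host; adjust to local policy.
    REQUIRED = {
        "passwordauthentication": "no",   # keys only, no password guessing
        "permitrootlogin": "no",          # no direct root logins
    }

    def audit_sshd():
        settings = {}
        for line in SSHD_CONFIG.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)
            if len(parts) == 2:
                # Note: real sshd honours the FIRST occurrence of a keyword;
                # this simplified parser keeps the last one it sees.
                settings[parts[0].lower()] = parts[1].strip()
        problems = []
        for key, wanted in REQUIRED.items():
            actual = settings.get(key)
            if actual is None or actual.lower() != wanted:
                problems.append("%s is %r, expected %r" % (key, actual, wanted))
        if "allowusers" not in settings and "allowgroups" not in settings:
            problems.append("no AllowUsers/AllowGroups whitelist configured")
        return problems

    if __name__ == "__main__":
        for problem in audit_sshd():
            print("ALERT:", problem)
    ```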


  • Registered Users Posts: 1,206 ✭✭✭ zig


    Hi amen, do you mind elaborating on what kind of redundancy plans you would consider (or maybe linking to good articles on the subject, specific to web development)? I'm just trying to work out the viability of it, really.

    Cheers

