Should Robots be allowed to take life Autonomously?

  • 16-07-2013 12:18pm
    #1
    Registered Users, Registered Users 2 Posts: 1,830 ✭✭✭Be like Nutella


    Below is a link to a new and important US Air Force report which sets out a massive increase in the use of drones in the military over the next 20 years, amounting to trillions of dollars in military spending. The main thrust of this Air Force long-term plan, it seems to me, is to make drones more and more autonomous over time, right up to and including total autonomy!

    It is called 'A Vision for Air Force Science & Technology During 2010-2030' and can be downloaded here if interested:

    http://www.af.mil/information/technologyhorizons.asp

    Any opinions are welcome of course.

    I am not neutral on this: I believe strongly that humans need to retain control of kill decisions, but I welcome any and all argument on the subject, as I believe that only balanced, open public debate about this important issue will produce responsible policy for this highly disruptive technological paradigm shift. There is an enormous amount of money and S+T invested in this area, which will amount to a new 'space-race' period of technological development and will affect a whole range of industries and economic development in general due to The Law of Accelerating Returns (a Ray Kurzweil concept).

    http://en.wikipedia.org/wiki/Accelerating_change#Kurzweil.27s_The_Law_of_Accelerating_Returns

    Here are a few quotes from the report I find most interesting and alarming; I've underlined the parts I regard as most relevant to the proposed path towards autonomy.

    Para 1 Page 60

    ' Airborne remote-piloted systems and many non-airborne systems have significant autonomous capability, however increasing levels of “flexible” autonomy will be needed for a wider range of Air Force functions during 2010-2030 and beyond. These will include fully unattended systems with reliable levels of autonomous functionality far greater than is possible today, as well as systems that can reliably make wide-ranging autonomous decisions at cyber speeds to allow reactions in time-critical roles far exceeding what humans can possibly achieve. '

    Para 2 Page 60

    '..in which this degree of autonomy can be varied from essentially none to near or complete autonomy...'

    Para 4 Page 60

    ' Although humans will retain control over strike decisions for the foreseeable future, far greater levels of autonomy will become possible by advanced technologies. '

    Para 6 Page 60

    ' Note that potential adversaries may be willing to field highly autonomous systems without any demand for prior certifiable verification and validation. In so doing they may gain potential capability advantages that we deny ourselves by requiring a high level of V&V. In general, the enormous capabilities that such highly autonomous systems can provide are one of the main conclusions of this study. '


    So I ask... should robots be allowed to kill people by themselves?


    Here's my opinion to start things off.

    Humans MUST retain the decision to take life in war, in every instance, in order to maintain a chain of responsibility. This will make it less likely for lethal actions to be authorized because an individual or individuals could be held directly responsible for specific lethal actions.
    During the next decade the global drone industry will be worth tens of billions of dollars annually. The powerful drone industry lobby is already influencing politicians across the United States and other countries in an effort to sell more and more combat drones. Armed drones, or UCAVs, have already been used to kill more than 4,000 people in Pakistan, Yemen and other countries. Up to now, humans have made every decision to fire a missile from these UCAVs (in approximately 450 strikes), but there are plans for completely autonomous UCAVs which would make that decision WITHOUT ANY HUMAN INPUT. That is something I feel we (as humans) should seek to avoid by passing a UN resolution which completely bans 'autonomous' acts of war by its members and legally requires humans to be in the loop in every lethal action, making every decision to take life traceable back to a human actor. In my opinion that will make acts of war less likely than the alternative, where robots can legally decide to take life by themselves.
    This proposed resolution should make it compulsory that a member of the military be capable of aborting any lethal action executed by an unmanned system, in any conflict, in real time, in every instance. It should apply to all manner of system types EVEN IF such an ability poses tactical difficulties not present in a totally autonomous scenario, including in the case of self-defence under the international laws of conflict. The human in the loop should be considered legally necessary in all lethal actions.
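    To put that in concrete software terms, here is a rough, purely illustrative sketch (Python, with entirely made-up names, not drawn from the report) of what 'human in the loop, with a real-time abort' could mean: the system may recommend a strike, but nothing is released without a named operator's approval, and that operator can abort right up to the moment of release, leaving an audit trail back to a person.

# Illustrative sketch only: a "human in the loop" gate for a lethal action.
# All names are hypothetical; nothing here is drawn from the Air Force report.
from dataclasses import dataclass
from typing import Optional
import time


@dataclass
class StrikeRecommendation:
    target_id: str
    confidence: float           # the system's own confidence in the identification
    recommended_by: str         # which subsystem produced the recommendation


@dataclass
class HumanAuthorisation:
    operator_id: str            # traceable back to a named person
    approved: bool
    timestamp: float


def execute_strike(rec: StrikeRecommendation,
                   auth: Optional[HumanAuthorisation],
                   abort_requested) -> str:
    """Release only if a named human approved, and no abort arrives before release."""
    if auth is None or not auth.approved:
        return "HELD: no human authorisation on record"
    if abort_requested():       # real-time abort, right up to release
        return f"ABORTED by operator {auth.operator_id}"
    return f"RELEASED: authorised by {auth.operator_id} at {auth.timestamp}"


# The audit trail ties every release to an operator_id, never to the machine alone.
rec = StrikeRecommendation("target-042", confidence=0.97, recommended_by="sensor-fusion")
auth = HumanAuthorisation("operator-7", approved=True, timestamp=time.time())
print(execute_strike(rec, auth, abort_requested=lambda: False))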

    In my opinion no unmanned system should EVER be allowed to take life autonomously, and we must take national and international legal steps to safeguard this proposed rule/law into the future, right up to the level of a globally ratified international treaty akin to the Laws of Conflict and international human rights law, and/or a UN-resolution-type agreement.

    What's your opinion? And what can or cannot be done about it?

    This is quite a broad subject and may veer off into things like visions of Terminator's SkyNet etc., and that's fine with me... there are a lot of second-tier effects of condoning the use of robots which can make their own 'kill decisions'.

    Right now we see drones merely as those Predators and Reapers over Pakistan firing Hellfire missiles at 'would-be' terrorists, but military drones are on an early curve of development, and as this report sets out, drones are and will be a central military option in all potential regional conflicts, from assassinating terrorists up to and including all-out war. It is in this context that I wish to start this debate here with anyone interested.

    For the uninitiated would-be drone nerds out there, here is where we are as far as autonomous drones go: recently a drone in its test phase, the X-47B, landed on a moving aircraft carrier all by itself with no input from a human. The complexity of that achievement cannot be overstated. There are very few pilots in the world who can land a jet on a carrier.



    http://en.wikipedia.org/wiki/Northrop_Grumman_X-47B


    So what do you think?

    Should Robots be allowed to take life Autonomously?


Comments

  • Registered Users, Registered Users 2 Posts: 28,789 ✭✭✭✭ScumLord


    The thing is, I think the robots would probably be inclined to fire less. They don't hate the enemy, they won't get any enjoyment out of killing and they could be set to be less willing to fire. Computers just don't come pre-programmed for violence like a living organism does; we could probably do with having a barrier like that between us and killing.

    Robots can be set to have multiple confirmations from multiple angles all at once before firing rather than making split-second decisions.
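    As a rough illustration (a sketch only, with invented sensor names and thresholds, not any real system's logic), 'multiple confirmations from multiple angles' could be as simple as requiring several independent sensors to agree before firing is even considered:

# Sketch: require several independent confirmations before fire is even considered.
# Sensor names and thresholds here are invented for illustration.
def confirmed_armed(detections, threshold=0.95, required_confirmations=3):
    """detections maps a sensor name (optical, thermal, radar...) to its
    confidence that the target is armed. Firing is only considered if enough
    independent sensors agree, each above the threshold."""
    agreeing = [s for s, conf in detections.items() if conf >= threshold]
    return len(agreeing) >= required_confirmations


# Two angles agree, one does not: the system holds fire rather than guess.
print(confirmed_armed({"optical": 0.99, "thermal": 0.97, "radar": 0.60}))   # False
print(confirmed_armed({"optical": 0.99, "thermal": 0.97, "lidar": 0.96}))   # True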

    It is one of those things that would be much better out of human hands. The fact is, shooting a person is quite often almost as bad for the person doing the shooting. Their lives can be ruined trying to cope with taking another life, or it can turn them into monsters. If the computer makes a logical decision to kill, it takes the burden off the people involved. The computer will more than likely be more informed as time goes on too; it will be able to see the entire battlefield at once and can direct the enemy into losing positions that minimise casualties on both sides.


  • Registered Users, Registered Users 2 Posts: 3,457 ✭✭✭Morbert


    Should humans be allowed?


  • Registered Users, Registered Users 2 Posts: 1,830 ✭✭✭Be like Nutella


    haha

    but yes humans should be legally allowed in certain situations


  • Moderators, Recreation & Hobbies Moderators, Science, Health & Environment Moderators, Technology & Internet Moderators Posts: 93,581 Mod ✭✭✭✭Capt'n Midnight


    Personal View.
    Someone must be responsible. This is not a passive defence where people can read the warning signs and decide not to trespass onto areas where "deadly force is authorised".

    Oddly enough this is perhaps the only time "I was only following orders" would apply, with responsibility kicked up the food chain.

    If a UAV wipes out a family then the person responsible for unleashing it should be up for war crimes. If the military say no, then tell them, "If you don't trust it to be that reliable, we don't either."


    The 1985 Chuck Norris film Code of Silence featured a REAL autonomous prowler robot. It was creepy to watch.

    We live in a world where there are actual fleets of robot assassins patrolling the skies. At some point there, we left the present and entered the future.



    ScumLord wrote: »
    The thing is, I think the robots would probably be inclined to fire less.
    You are anthropomorphising them; firing isn't something they like or dislike. That Prowler I mentioned was dumb. Can you build a robot capable of ethics, or even one that can distinguish non-combatants (what if someone tries to surrender)?

    Listen, and understand. That terminator is out there. It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.


    Mod View

    This has nothing to do with Popular Science


  • Moderators, Category Moderators, Science, Health & Environment Moderators, Society & Culture Moderators Posts: 47,532 CMod ✭✭✭✭Black Swan


    MOD MOVING COMMENT:
    This OP was not stated as a philosophical question. No philosophy was stated, nor any philosopher cited for support. Rather than lock the thread, it has been moved to US Politics for discussion.


  • Posts: 13,688 ✭✭✭✭ Taliyah Important Maiden


    Thanks OP, will give this a read tomorrow.


  • Closed Accounts Posts: 5,176 ✭✭✭Amerika


    Just a natural progression as we allow and rely on technology to do more of our thinking, and the sanctity of life continues to erode in society. And there will come a time when we won't even know what hit us (pun intended)

    It has become appallingly obvious that our technology has exceeded our humanity. -- Albert Einstein


  • Registered Users, Registered Users 2 Posts: 9,900 ✭✭✭InTheTrees


    The "autonomy" that the report refers to only applies to keeping the aircraft flying doesnt it? It means it can fly by itself and doesnt need a guy on the ground with a joystick. Aircraft autopilots do the same things currently.

    It doesnt mean the drone's deciding who to shoot at.


  • Registered Users, Registered Users 2 Posts: 28,789 ✭✭✭✭ScumLord


    Personal View.
    Someone must be responsible. This is not a passive defence where people can read the warning signs and decide not to trespass onto areas where "deadly force is authorised".
    The robot would need triggers to tell it to fire though. A soldier can see someone trespassing and straight away is going to be afraid that person is there to kill them. The soldier will fire prematurely out of fear. The robot will need to have a number of triggers, like proximity and actually confirming a weapon. The robot will also highlight its presence and warn the target to leave. The soldier may hide and won't want the target to know where he is.
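    To sketch that trigger sequence in code (purely illustrative; the distances, messages and function names are all made up, not any real system's logic), it might run roughly: proximity first, then a weapon check, then a warning, and only then a request to engage:

# Illustrative sketch of the trigger sequence described above: proximity,
# weapon confirmation, an audible warning, and only then escalation.
# Every value and name here is invented for illustration.
def sentry_response(distance_m, weapon_confirmed, warn, target_left_after_warning):
    if distance_m > 50:                     # proximity trigger not met
        return "OBSERVE"
    if not weapon_confirmed:                # no weapon seen: announce presence only
        warn("This area is restricted. Please leave.")
        return "WARN ONLY"
    warn("Armed intruder detected. Leave now or force may be used.")
    # (a real system would pause here to give the warning time to work)
    if target_left_after_warning():
        return "STAND DOWN"
    return "REQUEST PERMISSION TO ENGAGE"   # still a request, not a kill decision


# An unarmed trespasser only ever gets a warning in this sketch.
print(sentry_response(30, weapon_confirmed=False,
                      warn=print, target_left_after_warning=lambda: True))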


    You are anthropomorphising them; firing isn't something they like or dislike. That Prowler I mentioned was dumb. Can you build a robot capable of ethics, or even one that can distinguish non-combatants (what if someone tries to surrender)?
    I'm not anthropomorphising them, I'm doing the exact opposite.

    The robot doesn't need ethics. An Xbox Kinect can already understand human gestures; the robot will be better able to tell (possibly using multiple angles) if the person is armed, and it won't get scared. It won't mind sacrificing itself on the chance it's wrong either.

    I don't see how this isn't a popular science thread, especially since it's the Israelis working on a lot of these automated combat drones.


  • Closed Accounts Posts: 5,176 ✭✭✭Amerika


    I recently read that the Air Force already uses technology that looks for a signal identifier from a friendly approaching aircraft in combat situations. If it's not determined to be friendly, the pilot can dispatch a missile long before the other aircraft is seen or even identified as an enemy. It would probably be a simple "flip of the switch" to make this an automatic response in an unmanned drone.
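    For the curious, here is how small the code difference between 'alert the pilot' and 'fire automatically' could be after a failed IFF (identification friend or foe) check. This is a purely illustrative sketch; the flag and function names are hypothetical, not anything from a real system:

# Illustrative only: the gap between a human-confirmed and an automatic
# response to a failed friend-or-foe (IFF) check can be a single flag.
def handle_contact(iff_reply_valid, autonomous_mode, alert_pilot, launch_missile):
    if iff_reply_valid:
        return "FRIENDLY: no action"
    if autonomous_mode:                 # the "flip of the switch" case
        launch_missile()
        return "ENGAGED automatically"
    alert_pilot("Unidentified contact: no valid IFF reply")
    return "WAITING on human decision"


# With the switch off, a failed IFF check only raises an alert.
print(handle_contact(False, autonomous_mode=False,
                     alert_pilot=print, launch_missile=lambda: None))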


  • Moderators, Recreation & Hobbies Moderators, Science, Health & Environment Moderators, Technology & Internet Moderators Posts: 93,581 Mod ✭✭✭✭Capt'n Midnight


    ScumLord wrote: »
    The robot would need triggers to tell it to fire though. A soldier can see someone trespassing and straight away is going to be afraid that person is there to kill them. The soldier will fire prematurely out of fear. The robot will need to have a number of triggers, like proximity and actually confirming a weapon. The robot will also highlight its presence and warn the target to leave.
    Only if the programming works - remember Ed-209 ?

    And only in an ideal world

    This is what actually happened back in 1984 (!), and let's be clear: it was built, it could be switched to autonomous, it was lethal, and it's unlikely it could differentiate between friend and foe given the state of the art back then.
    http://www-cs-faculty.stanford.edu/~eroberts/cs181/projects/autonomous-weapons/articles/killer-robots.txt
    The Prowler (which stands for Programmable Robot Observer With Logical Enemy Response) is a mobile robotic sentry, resembling a miniature tank. In final form, the robot will be equipped with microcomputers, artificial intelligence software and distance-ranging sensors, allowing it to patrol the perimeters of an area, such as a military base or airfield, and to identify intruders. Purchasers will specify the sensors and weaponry they desire -- and also whether or not the robot will need human permission (from a remote monitoring station) before it opens fire.

    Moral Issues: The prototype Prowler is equipped with two M60 machine guns and a grenade launcher. "In the United States, we don't foresee the Prowler armed with lethal weapons," says Walt Lee, RDS marketing coordinator. "Or, if it is, there will always be a human required in the loop for activation. But there are countries where there aren't the same political or moral considerations."



    I'm not anthropomorphising them, I'm doing the exact opposite.

    The robot doesn't need ethics. An Xbox Kinect can already understand human gestures; the robot will be better able to tell (possibly using multiple angles) if the person is armed, and it won't get scared. It won't mind sacrificing itself on the chance it's wrong either.
    Again I refer you to ED-209.

    Your arguments about how they interact are kinda moot when sensors to detect motion / noise / infra-red were deemed to be good enough when they built the real prowler robot nearly 30 years ago.


    In a game 99% accuracy is fine. But remember in a game there is feedback while the player is learning the controls. In the real world 99% just isn't good enough and besides the humans encountered won't understand the rules any better than the programmers understand local cultural differences. In parts of India shaking your head means "yes"

    As for sacrificing itself, that just sounds like you've bought into the "3 Laws", which to me represents the slavery of sentient beings. Corporations and the military will set the rules; if it's lethal then it's unlikely to be set to never use its weapons.



    I don't see how this isn't a popular science thread? Especially since it's the Israelis working on a lot of this automated combat drones.
    Where is the science ?

    It's a mixture of off the shelf parts and some programming


    And it's very difficult to have a technofix for an ethical problem, especially if insurgents / freedom fighters / guerrillas / terrorists do things that are likely to result in the robot causing collateral damage.


  • Registered Users, Registered Users 2 Posts: 28,789 ✭✭✭✭ScumLord


    Only if the programming works - remember Ed-209 ?

    This is what actually happened back in 1984

    Again I refer you to ED-209.
    I see two problems with your argument: first of all, ED-209 was a fictional robot. Second of all, you're comparing technology from 30 years ago to today. It's like comparing the Romans to the Victorians; while there are some similarities, they're completely different technology.

    We have consumer tech now that people have turned into face-hunting robots, and we have learning, adaptable robots now. The consumer tech is enough to make a robot that could outperform the Prowler tech.


    In a game 99% accuracy is fine. But remember in a game there is feedback while the player is learning the controls. In the real world 99% just isn't good enough and besides the humans encountered won't understand the rules any better than the programmers understand local cultural differences. In parts of India shaking your head means "yes"
    99% accuracy is still 10 times better than what humans achieve. We also program in the cultural differences; if you know shaking of the head means yes, then I'm sure the people programming can find that out too. Robots like these take years of development; they won't just throw one together and let it loose. There's plenty of time (and probably Indian programmers working in the industry) to find kinks and program solutions to problems that arise should the robot have problems with identification.

    The robot also has the advantage of having many eyes and sensors pointing at the same problem from many different angles: a few drones overhead, a radar station, a lidar station. Together they can see the entire battlefield at once and are not stuck to one visual point of reference.
    As for sacrificing itself, that just sounds like you've bought into the "3 Laws", which to me represents the slavery of sentient beings. Corporations and the military will set the rules; if it's lethal then it's unlikely to be set to never use its weapons.
    My point is that with a robot, if it doesn't get the necessary triggers to fire, the target could walk right up to it. If the person is a bad guy and blows up the robot, it's not the end of the world. The military can risk the loss of a robot more easily than they can risk the loss of a person. People would still be the most expensive piece of machinery on the battlefield.



    Where is the science ?

    It's a mixture of off the shelf parts and some programming
    It's not a mixture of off the shelf parts at all. You're being pretty dismissive of the development of artificial intelligence. It ties into so many areas of cutting-edge technology. At least magazines like Focus and Scientific American seem to think it's popular science; they've run a number of articles on it.

    These are somewhat self-aware robots that can recognise themselves and other robots, and learn from each other to develop their own primitive language.


    And it's very difficult to have a technofix for an ethical problem, especially if insurgents / freedom fighters / guerrillas / terrorists do things that are likely to result in the robot causing collateral damage.
    Like soldiers they will want permission to fire; the ethical dilemmas still rest with humans. The robot is just a very accurate gun that removes the need for skill. We still have to decide whether it's right or not to kill the person at the other end and poor programming is the same as poor tactics.

    But overall, as the American military say, war isn't supposed to be fair. If the military are involved people are going to die; that's the job of the military. I think overall robots will help reduce unnecessary human deaths rather than cause more.

    I think these are going to be the deadliest robots of the future.
    Stick some guns on them things and everybody's fecked.


  • Moderators, Recreation & Hobbies Moderators, Science, Health & Environment Moderators, Technology & Internet Moderators Posts: 93,581 Mod ✭✭✭✭Capt'n Midnight


    ScumLord wrote: »
    I see two problems with your argument: first of all, ED-209 was a fictional robot. Second of all, you're comparing technology from 30 years ago to today. It's like comparing the Romans to the Victorians; while there are some similarities, they're completely different technology.
    That's one of the purposes of SciFi: to promote discussion of these things in advance.

    Let's imagine a real ED-209. Who would be responsible for the deaths ?
    It's not an industrial accident because it's a device designed to kill.



    99% accuracy is still 10 times better than what humans achieve. We also program in the cultural differences; if you know shaking of the head means yes, then I'm sure the people programming can find that out too. Robots like these take years of development; they won't just throw one together and let it loose. There's plenty of time (and probably Indian programmers working in the industry) to find kinks and program solutions to problems that arise should the robot have problems with identification.
    You are assuming they do it properly and don't cut any corners and get all the correct assumptions first time.

    It won't take years. Google already have autonomous cars; just bolt a sentry gun to the roof rack, add a subroutine to the existing control system, and you're good to go.

    The robot is just a very accurate gun that removes the need for skill. We still have to decide whether it's right or not to kill the person at the other end and poor programming is the same as poor tactics.
    If the robot is allowed to take life autonomously then it's far more than a gun.


    But overall, as the American military say, war isn't supposed to be fair. If the military are involved people are going to die; that's the job of the military. I think overall robots will help reduce unnecessary human deaths rather than cause more.
    If by unnecessary you mean American lives then yes, at first anyway, until the locals get upset about unaccountable robots killing them.

    Two or three thousand people have been killed by drones in parts of Pakistan already but so far humans have pushed the button and could be held accountable.


    We're possibly heading back to the idea that the RAF could police Iraq for a fraction of the cost of the army because they could gas the Kurds.

    I think these are going to be the deadliest robots of the future.
    Stick some guns on them things and everybody's fecked.
    A Predator drone can carry the same payload as a twin-engined WWII bomber (somewhere between a B-25 and a Wellington) and stay in the air for more than 20 minutes


  • Registered Users, Registered Users 2 Posts: 28,789 ✭✭✭✭ScumLord


    That's one of the purposes of SciFi: to promote discussion of these things in advance.

    Let's imagine a real ED-209. Who would be responsible for the deaths ?
    It's not an industrial accident because it's a device designed to kill.
    There would be a number of people responsible. The main person, be it a director or a president, is always going to be responsible for the actions and mistakes made by those under him. If the machine does its job as directed then the people giving the orders are responsible, just like they're responsible for the killings human soldiers do. If the machine makes a mistake the blame also falls to the engineers that didn't foresee the problem.



    You are assuming they do it properly and don't cut any corners and get all the correct assumptions first time.
    Corners don't get cut as much in these kinds of industries anymore; the introduction of standards and common work practices reduces errors. We've been manufacturing for a long time. We know how to do it properly while reducing mistakes. One of my roles is as a quality control auditor, and we go through a lot to maintain quality in our little factory; what they go through in larger-scale production is even more rigorous. They have to be that rigorous just to get large-scale projects progressing at all.

    It won't take years. Google already have autonomous cars; just bolt a sentry gun to the roof rack, add a subroutine to the existing control system, and you're good to go.
    It's just not that easy. No machine works as expected the first, second, third, etc.. etc.. time..

    If the robot is allowed to take life autonomously then it's far more than a gun.
    It's not really; it's just following the same procedures it would if there was a human in charge. Machines don't make decisions; they respond to triggers with a predetermined action.


    If by unnecessary you mean American lives then yes, at first anyway, until the locals get upset about unaccountable robots killing them.
    No, I mean all life. If the robot can determine threats better and has the opportunity to strike at more opportune times, it will help reduce all collateral damage.
    A Predator drone can carry the same payload as a twin-engined WWII bomber (somewhere between a B-25 and a Wellington) and stay in the air for more than 20 minutes
    Drones will become redundant to a large extent I'd say. The quadrocopters will act in a swarm and will be able to enter a building and find individuals at a relatively low cost compared to current drones.


  • Moderators, Recreation & Hobbies Moderators, Science, Health & Environment Moderators, Technology & Internet Moderators Posts: 93,581 Mod ✭✭✭✭Capt'n Midnight


    Just a reminder. You are describing how things should be done properly. I've described what was actually done. And the best predictor for the future is the past.


    ScumLord wrote: »
    There would be a number of people responsible,
    If everyone is responsible then no one is responsible.
    The main person, be it a director or a president, is always going to be responsible for the actions and mistakes made by those under him. If the machine does its job as directed then the people giving the orders are responsible, just like they're responsible for the killings human soldiers do. If the machine makes a mistake the blame also falls to the engineers that didn't foresee the problem.
    Ah, the good old blame game: the buck gets passed.


    Corners don't get cut as much in these kinds of industries anymore; the introduction of standards and common work practices reduces errors. We've been manufacturing for a long time. We know how to do it properly while reducing mistakes.
    Capacitor plague, tin whiskers, purple plague, batteries that catch fire. Corners are cut all the time.
    Weighing a new model of aircraft for the first time is always scary.


    It's just not that easy. No machine works as expected the first, second, third, etc.. etc.. time..
    qft.

    Drones will become redundant to a large extent I'd say. The quadrocopters will act in a swarm and will be able to enter a building and find individuals at a relatively low cost compared to current drones.
    Drones can stay aloft for up to 36 hours.


  • Registered Users, Registered Users 2 Posts: 6,696 ✭✭✭Jonny7


    Whether they fire a cruise missile or use a drone, it's the same principle; however, a drone is generally much more accurate.

    Neither is autonomous.


  • Moderators, Recreation & Hobbies Moderators, Science, Health & Environment Moderators, Technology & Internet Moderators Posts: 93,581 Mod ✭✭✭✭Capt'n Midnight


    Jonny7 wrote: »
    Whether they fire a cruise missile
    For a second I thought you were Johnny 5 :pac:

    But yeah, when you think about it a cruise missile is programmed to fly to X and kill everything there. But of course a human picks the spot marked X.


    The horrible film Stealth (it was awful) had the UAV "decide" to continue with a mission because it was OK with the level of collateral damage.


  • Registered Users, Registered Users 2 Posts: 6,696 ✭✭✭Jonny7


    In fact a drone is much more effective than a cruise missile because it has a human operator to control it and make a better judgement.

    Come to think of it, it's much better than conventional artillery, dumb and laser-guided bombs and missiles in almost every respect.


  • Registered Users, Registered Users 2 Posts: 28,789 ✭✭✭✭ScumLord


    Just a reminder. You are describing how things should be done properly. I've described what was actually done. And the best predictor for the future is the past.
    I don't think you are describing what is done. Modern R&D along with modern manufacturing just don't make the mistakes you're talking about anymore. Developers have a rule, 10-100-1000: the scale of how expensive it is to fix problems during development, testing, and after launch. Any company that has suffered at the hands of corner cutting isn't going to be able to afford to go through that cost and reputation damage again. It's just a poor comparison to compare 2013 manufacturing to 1980s manufacturing; they're a completely different kettle of fish.

    Much of the US military's development is outsourced too, so they don't have to worry about corner cutting; they just find a company that has all the procedures in place and get them to do the work to a certified standard.

    If everyone is responsible then no one is responsible.
    I never said no one is responsible, but the responsibility would depend on what went wrong. Another mechanism of modern manufacturing is to not start throwing around blame; it's to find the actual cause of the fault and make sure it's rectified so it can't happen again. It's not always someone's fault when things go horribly wrong.



    Capacitor plague, tin whiskers, purple plague, batteries that catch fire. Corners are cut all the time.
    Corners are cut in the consumer market all the time; they're trying to reduce costs and there's very little comeback from the customers anyway, so they can cut corners and get away with it. The same can't be said for the military market. If you want to get the next contract, the current one had better work on budget.


  • Moderators, Recreation & Hobbies Moderators, Science, Health & Environment Moderators, Technology & Internet Moderators Posts: 93,581 Mod ✭✭✭✭Capt'n Midnight


    ScumLord wrote: »
    The same can't be said for the military market. If you want to get the next contract, the current one had better work on budget.
    :eek:

    I'd nearly challenge you to name one big ticket military software or hardware that came in on time and on budget

    Teething problems that take years to resolve are the norm.

    And there are numerous weapons systems that have failed to deliver.

    A classic case from the past was the Patriot missile trying to intercept Scuds underground based on the projected paths. Look at the Littoral Combat Ships, or the Joint Strike Fighter, or the Future Combat System, or the F-22 or F-35.

    The USS Gerald R. Ford is about $2.8 Bn over budget. The UK carriers are up to £1.6 Bn over budget depending on which changes are made / which aircraft they buy.

    http://www.guardian.co.uk/uk/2013/may/10/navy-jets-cant-land-hot-weather


  • Registered Users, Registered Users 2 Posts: 28,789 ✭✭✭✭ScumLord


    :eek:

    I'd nearly challenge you to name one big ticket military software or hardware that came in on time and on budget
    I'm not an avid follower of military spending, but I'm sure plenty of projects do come in on budget; they may have hundreds of them, ranging from huge projects like aircraft carriers to smaller budgets that deal with more transportable food containers.
    Teething problems that take years to resolve are the norm.
    In many industries.
    And there are numerous weapons systems that have failed to deliver.
    Are these research projects? I'm sure the military look at every failed project as a lesson learned. You can't know if something is going to work until you make prototypes and even if the prototype works there's nothing to say it will be possible to manufacture the product for a reasonable cost.

    The USS Gerald R. Ford is about $2.8 Bn over budget. The UK carriers are up to £1.6 Bn over budget depending on which changes are made / which aircraft they buy.
    As far as I can see the Ford is only being delayed by 3 months, which is not a long period of time for such a large project, and the delays are due to new technology, which again wouldn't be all that surprising. Budgets and end dates are best estimates; most people would expect something to come along and change those numbers. If the UK military want to go changing planes and equipment halfway through a build it's going to cost; that's unavoidable.

    When you're building large-scale projects like these that are essentially untested, bespoke designs, delays are inevitable. I'm sure if you compared the commercial shipbuilding market to the military one you'd probably see the same delays across large tankers and passenger ships.


  • Moderators, Recreation & Hobbies Moderators, Science, Health & Environment Moderators, Technology & Internet Moderators Posts: 93,581 Mod ✭✭✭✭Capt'n Midnight


    ScumLord wrote: »
    As far as I can see the Ford is only being delayed by 3 months, which is not a long period of time for such a large project, and the delays are due to new technology, which again wouldn't be all that surprising.
    Originally due in 2015, it's slipped back to November 2016.

    And I'm glad to see you acknowledge that new technology frequently has problems.


  • Registered Users, Registered Users 2 Posts: 28,789 ✭✭✭✭ScumLord


    Originally due in 2015, it's slipped back to November 2016.
    That's active service. It will be completed before that and seems to be undergoing a testing phase. The new technology probably extended that testing phase. Sure as long as it's ready before the next war it'll be ok.
    And I'm glad to see you acknowledge that new technology frequently has problems.
    Problems are unavoidable but the way we deal with them has changed completely. In the 80s the employee got the blame. Today the process is investigated to try to prevent problems from occurring.

