
Self driving buses, trains, trucks etc


Comments

  • Registered Users Posts: 19,747 ✭✭✭✭cnocbui


    Ok, I take it back, the dollars are all that matter and the fact they can't write the code is of no consequence. You win, I concede.


  • Registered Users Posts: 33,672 ✭✭✭✭listermint


    So Uber moved from multiple Lidar sensors down to 1

    https://www.engadget.com/2018/03/28/uber-reduced-safety-sensors-on-its-autonomous-cars/

    There you have it: a company that wants to pay its workers a pittance until it can replace them with AI also wants to basically feed the AI a pittance.

    A leopard can't change its spots, it seems.


  • Registered Users Posts: 5,813 ✭✭✭Cordell


    cnocbui wrote: »
    AlphaGo required 1202 processors and 176 GPUs
    Really practical and cheap to fit in a car.

    You don't need them in cars, you only need them in datacenters to train the pattern recognition / neural network / deep learning algorithms. Training and running these algorithms are highly asymmetrical processes from a resources point of view. As a practical example, the ubiquitous face detection algorithms that every phone can effortlessly run in milliseconds can take hours, even days, to train and tune on powerful computers (there's a rough sketch of that asymmetry at the end of this post).
    cnocbui wrote: »
    One day car driving AI's may well match humans, but I think that is further away than estimated and I think it unethical to kill a lot of people to get there.

    They will save a lot more. That is the ethical question: save hundreds of thousands (or millions, there are over 1 million road traffic deaths annually) by killing a few (exclusively due to technical faults). Can anyone make this decision?
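    Just to illustrate that training/inference asymmetry: below is a minimal sketch assuming OpenCV and its bundled pre-trained Haar cascade face detector (the image path is a placeholder, and this is purely illustrative, not what any car maker ships). The expensive training was done offline long ago; running the trained detector takes milliseconds.

```python
# Rough illustration of the train-once / run-cheap asymmetry.
# Assumes OpenCV (pip install opencv-python) and its bundled
# pre-trained frontal-face Haar cascade; purely illustrative.
import time
import cv2

# Load a detector that someone else spent the heavy compute training.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("photo.jpg")                # placeholder: any test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

start = time.perf_counter()
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
elapsed_ms = (time.perf_counter() - start) * 1000

# Inference finishes in milliseconds on commodity hardware;
# training and tuning the cascade originally took hours-to-days offline.
print(f"Found {len(faces)} face(s) in {elapsed_ms:.1f} ms")
```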


  • Registered Users Posts: 19,747 ✭✭✭✭cnocbui


    Cordell wrote: »
    You don't need them in cars, you only need them in datacenters to train the pattern recognition / neural network / deep learning algorithms. Training and running these algorithms are highly asymmetrical processes from a resources point of view. As a practical example, the ubiquitous face detection algorithms that every phone can effortlessly run in milliseconds can take hours, even days, to train and tune on powerful computers.



    They will save a lot more. That is the ethical question: save hundreds of thousands (or millions, there are over 1 million road traffic deaths annually) by killing a few (exclusively due to technical faults). Can anyone make this decision?

    How do you know AI drivers will save lives? How is that unquestioning statement of 'fact' any different to the vastly overstated promises regarding the F-35's capabilities? I don't think people should be making a decision that X deaths are acceptable in the 'hope' that better performance is achieved.


  • Registered Users Posts: 5,813 ✭✭✭Cordell


    I assume that 99.99% of road traffic accidents will be avoided. Car-to-car collisions will almost never happen if both cars are autonomous. Cars running over pedestrians and cyclists, same - yeah, I choose to ignore the Uber incident for this argument. Drunk/sick/tired/incapacitated drivers losing control - no such thing.

    Almost any technology was introduced with what was regarded as an acceptable risk of loss of life - "regular" cars kill people, airplanes kill people, even us, here, using electricity to have a chat, we pollute the planet by doing so.
    When a new driver gets his licence there is an assumed and accepted risk that he may kill someone while driving.
    Closing a hospital for financial reasons means some people who could be saved will die while en route to the other hospital.

    So you see, this kind of decision is nothing new.


  • Registered Users Posts: 19,747 ✭✭✭✭cnocbui


    Cordell wrote: »
    I assume that 99.99% of road traffic accidents will be avoided. Car-to-car collisions will almost never happen if both cars are autonomous. Cars running over pedestrians and cyclists, same - yeah, I choose to ignore the Uber incident for this argument. Drunk/sick/tired/incapacitated drivers losing control - no such thing.

    Almost any technology was introduced with what was regarded as an acceptable risk of loss of life - "regular" cars kill people, airplanes kill people, even us, here, using electricity to have a chat, we pollute the planet by doing so.
    When a new driver gets his licence there is an assumed and accepted risk that he may kill someone while driving.
    Closing a hospital for financial reasons means some people who could be saved will die while en route to the other hospital.

    So you see, this kind of decision is nothing new.


    I have a completely unhackable OS to sell you. I know someone who has a PhD in electrical engineering and teaches the subject. One thing I have heard them say on several occasions is 'never assume anything'.


  • Registered Users Posts: 5,813 ✭✭✭Cordell


    Of course, when you design it, but for our conversation it's fine. But let's dial it back a little:
    1000 road deaths today, this technology can save 500 and kill an extra 50. Grand total 450 lives saved. Extremely conservative estimates.
    Do you think this decision will not be made? It is made already, on a daily basis. Just not presented as such.
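    To make the trade-off explicit, here is the same back-of-the-envelope sum in a few lines of Python, using only the purely hypothetical figures from the post above (they are not real statistics):

```python
# Hypothetical figures from the post above, not real statistics.
deaths_today = 1000          # road deaths with human drivers
deaths_prevented = 500       # deaths the technology avoids
deaths_introduced = 50       # extra deaths caused by technical faults

deaths_with_automation = deaths_today - deaths_prevented + deaths_introduced
net_lives_saved = deaths_today - deaths_with_automation

print(f"Deaths with automation: {deaths_with_automation}")  # 550
print(f"Net lives saved: {net_lives_saved}")                # 450
```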


  • Moderators, Category Moderators, Arts Moderators, Sports Moderators Posts: 48,459 CMod ✭✭✭✭magicbastarder


    https://www.theregister.co.uk/2018/03/28/uber_selfdriving_death_may_have_been_due_to_lidar_blind_spot/

    TL;DR - uber reduced the number of LIDAR sensors from five to one when they rolled out the system on the volvo.


  • Registered Users Posts: 5,813 ✭✭✭Cordell


    I don't think that was the cause. Even with one lidar it still had enough sources of information, such as cameras and the front radar, to detect the pedestrian. If I am to speculate, the computer system failed (a catastrophic failure like a total crash or reboot) and failed to warn the driver as well - this is a requirement of the applicable standards: the system can fail, but it has to warn about the failure in due time.


  • Moderators, Category Moderators, Arts Moderators, Sports Moderators Posts: 48,459 CMod ✭✭✭✭magicbastarder


    yeah, seems to be one hell of a blind spot if it's directly in front of the car.

    seems there was an unfortunately friendly relationship between the arizona governor and uber (this is linked in the article above):
    https://www.theguardian.com/technology/2018/mar/28/uber-arizona-secret-self-driving-program-governor-doug-ducey


  • Registered Users Posts: 5,813 ✭✭✭Cordell


    That's a different angle. It is known that Uber likes to befriend authorities, especially when it comes to their main business model, that is, unlicensed taxi service :)


  • Registered Users Posts: 19,747 ✭✭✭✭cnocbui


    Turns out this recent second Tesla fatality is another instance where the autopilot was engaged. Tesla don't know why the car didn't detect the solid concrete divider.


  • Closed Accounts Posts: 345 ✭✭bebeman


    Safe to say bus drivers' jobs are safe and won't be replaced for decades.


  • Closed Accounts Posts: 3,478 ✭✭✭eeguy


    bebeman wrote: »
    Safe to say bus drivers' jobs are safe and won't be replaced for decades.

    There are about a dozen autonomous bus services around Europe. They're not large or fast, but some jobs have definitely been lost already.

    I wouldn't look to Tesla as the pinnacle of autonomous development. Autopilot is just a fancy feature. Uber are rushing out of the gate with half-baked technology.

    Wait to see what Google, Renault and the more established players are at before you make a judgement. These guys aren't out with a fanfare, they're just quietly developing in the background.


  • Registered Users Posts: 5,813 ✭✭✭Cordell


    bebeman wrote: »
    Safe to say bus drivers' jobs are safe and won't be replaced for decades.

    Yeah, about one decade until it starts. And that's being conservative.


  • Registered Users Posts: 1,387 ✭✭✭brokenarms


    bebeman wrote: »
    Safe to say bus drivers' jobs are safe and won't be replaced for decades.

    Just imagine how long it would take one to pull out of a stop in Dame Street. I would guess 30 mins if they got a sleeping taxi man.


  • Closed Accounts Posts: 345 ✭✭bebeman


    As more jobs become automated, the ones that can't be automated will increase in desirability. At the moment no child dreams of becoming a bus driver, but give it a few years.
    All non-safety-critical jobs will be replaced by robots; those where the public feels the need to have a human hand in control will be premium jobs.
    As long as we hear about automated vehicles causing injury/death, human drivers have job security.
    It will be decades before this changes.


  • Posts: 0 [Deleted User]


    What is more likely to happen is that the insurance industry will be the main driver in the conversion to autonomous transport once level 5 becomes a mainstream option

    You want to control the vehicle, expect to pay a massive premium. Let the vehicle drive itself, pay a far smaller premium.

    The simple fact is that the main cause of traffic incidents is the human factor. Take that out of the equation and watch road collisions fall drastically


  • Registered Users Posts: 19,747 ✭✭✭✭cnocbui


    What is more likely to happen is that the insurance industry will be the main driver in the conversion to autonomous transport once level 5 becomes a mainstream option

    You want to control the vehicle, expect to pay a massive premium. Let the vehicle drive itself, pay a far smaller premium.

    The simple fact is that the main cause of traffic incidents is the human factor. Take that out of the equation and watch road collisions fall drastically

    The statement that road collisions will fall drastically is nothing but an article of faith for those who pray to the digital god. There is no proof for this statement. It's nothing more than dogma.

    Yes, humans make mistakes and are fallible, but the idea that fallible humans can make infallible replacements for themselves is, I think, a huge conceit.

    I can't prove it, but I believe driving will prove to be a task that requires general intelligence and is not reducible to just a complex, but limited, set of variables and defined responses. No neural network can give a system general intelligence.


  • Closed Accounts Posts: 3,478 ✭✭✭eeguy


    cnocbui wrote: »
    The statement that road collisions will fall drastically is nothing but an article of faith for those who pray to the digital god. There is no proof for this statement. It's nothing more than dogma.

    There's some evidence so far that autonomous systems are already better than humans, with average crashes per million miles well below the human rate. Waymo releases crash statistics regularly.

    I think you're too quick to judge, and basing your judgement on what are poor examples of autonomous capabilities. Tesla is not an autonomous car. It's a fancy cruise control and lane assist. Uber is a rushed proof of concept that has had one death in 3 million miles, and the blame hasn't been fully apportioned to Uber.

    Look at Waymo, GM, Renault and Mercedes to see what the technology is capable of, but take that with a pinch of salt as we're still years from a proper release.


  • Moderators, Science, Health & Environment Moderators Posts: 19,392 Mod ✭✭✭✭Sam Russell


    One point to note about autonomous cars.

    Their control systems will not be drunk or reckless. They will not drive above the speed limit, nor will they break traffic lights or stop signs. Those factors alone will reduce accident rates.

    Whether they can cope with human road users who do all those things - pedestrians, cyclists, and motorists - is another question.


  • Registered Users Posts: 19,747 ✭✭✭✭cnocbui


    eeguy wrote: »
    There's some evidence so far that autonomous systems are already better than humans, with average crashes per million miles well below the human rate. Waymo releases crash statistics regularly.

    I think you're too quick to judge, and basing your judgement on what are poor examples of autonomous capabilities. Tesla is not an autonomous car. It's a fancy cruise control and lane assist. Uber is a rushed proof of concept that has had one death in 3 million miles, and the blame hasn't been fully apportioned to Uber.

    Look at Waymo, GM, Renault and Mercedes to see what the technology is capable of, but take that with a pinch of salt as we're still years from a proper release.

    Tesla should have had its 'cruise control' disabled entirely by the NTSB by now. The fact this hasn't happened indicates the near nonexistent oversight of the whole process. It's clear to me that autonomous driving is being developed in an unsafe manner in the US, which is hardly surprising.

    Uber will be allowed to carry on, despite killing someone. Astonishing that we will get a safer endgame result from unsafe participants being allowed to participate in an unsafe manner. Seems an illogical conclusion to me.

    In 4 million miles of driving, Waymo have had 63 disengagements, and yet you and others claim the accident rate is less than for humans. How did you reach that conclusion?

    What would the accident rate for Waymo be had a human not taken control 63 times? I know Waymo classify their disengagements as safety-critical and non-safety, claiming the former are few, but I don't agree with them that there is such a clear distinction.
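    For illustration, here is a rough back-of-the-envelope calculation using only the figures quoted in this thread (4 million miles, 63 disengagements); the worst-case assumption that every disengagement would otherwise have been a crash is mine, not Waymo's:

```python
# Back-of-the-envelope sketch using the figures quoted in this thread
# (4 million miles, 63 disengagements). Worst case: assume every
# disengagement would otherwise have been a crash.
miles_driven = 4_000_000
disengagements = 63

worst_case_per_million_miles = disengagements / (miles_driven / 1_000_000)
miles_per_disengagement = miles_driven / disengagements

print(f"Worst case: {worst_case_per_million_miles:.1f} incidents per million miles")
print(f"One disengagement every {miles_per_disengagement:,.0f} miles "
      f"(~{miles_per_disengagement * 1.609:,.0f} km)")
# Prints roughly 15.8 per million miles and one disengagement every
# ~63,500 miles (~102,000 km); how that compares to human drivers
# depends on which human crash statistic you pick.
```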


  • Posts: 0 [Deleted User]


    cnocbui, I think no matter what evidence is put before you, you will dismiss it and continue to say that it's too complex a task for computers.

    You know what, that's totally fine. It should be challenged. Every new innovation is, from ways to build, insulate and power homes up to and including medical innovations. This is exactly how the bar is set for everything from the food industry to the local optician down the road.

    As challenges are met and achieved, things called standards and regulations are developed to regulate industries.

    But here's the thing: once the appropriate level of evidence is gathered and it's shown that this is the safer/better option, it will be adopted on a widespread basis.

    Again, harping back to medical innovations, think of the "adverse side effects" warnings on drugs or the radiation emitted by X-rays and MRIs, to name two.

    While there will always be a risk, once that risk is lower than the existing situation, it will be approved, it will be released, it will be used. If that risk is not lower, it won't be.

    Given the number of entities involved in making this work (it's worth billions to whoever cracks Level 5), it is simply a matter of time until the requirements are met and this is adopted en masse.


  • Closed Accounts Posts: 3,478 ✭✭✭eeguy


    cnocbui wrote: »
    Tesla should have had its 'cruise control' disabled entirely by the NTSB by now. The fact this hasn't happened indicates the near nonexistent oversight of the whole process. It's clear to me that autonomous driving is being developed in an unsafe manner in the US, which is hardly surprising.

    Uber will be allowed to carry on, despite killing someone. Astonishing that we will get a safer endgame result from unsafe participants being allowed to participate in an unsafe manner. Seems an illogical conclusion to me.

    In 4 million miles of driving, Waymo have had 63 disengagements, and yet you and others claim the accident rate is less than for humans. How did you reach that conclusion?

    What would the accident rate for Waymo be had a human not taken control 63 times? I know Waymo classify their disengagements as safety-critical and non-safety, claiming the former are few, but I don't agree with them that there is such a clear distinction.

    Tesla have enough disclaimers and data to show that their systems work when properly monitored.
    Uber have stopped for the time being, but people are going to die whether the cars are autonomous or not.
    Disengagements have drastically reduced over the years. They're also a bad metric for measuring autonomy: one company could be driving well-marked routes in Arizona and have zero, and another could be pushing the technology in snowy Seattle and having many.

    Again, what you see now is not a finished product. The next two years will be a watershed as Uber, Tesla, GM and Google bring large fleets on stream.


  • Moderators, Category Moderators, Arts Moderators, Sports Moderators Posts: 48,459 CMod ✭✭✭✭magicbastarder


    cnocbui wrote: »
    In 4 million miles of driving, Waymo have had 63 disengagements, and yet you and others claim the accident rate is less than for Humans. How did you reach that conclusion?
    so waymo - while in development phase - have had a disengagement once every 100,000km? and you're using this as evidence that it can't cope?
    as pointed out above, this is a rate experienced when the vast majority of other vehicles it has to cope with are human controlled. i'd guess (in car to car interactions) you could add a zero if the vast majority of other vehicles were autonomous.


  • Registered Users Posts: 19,747 ✭✭✭✭cnocbui


    so waymo - while in development phase - have had a disengagement once every 100,000km? and you're using this as evidence that it can't cope?
    as pointed out above, this is a rate experienced when the vast majority of other vehicles it has to cope with are human controlled. i'd guess (in car to car interactions) you could add a zero if the vast majority of other vehicles were autonomous.

    Not what I said, you are putting words in my mouth.

    We have three fatalities in 1/20th of the Australian average driving distance per fatality, and you are making the argument that the Waymo disengagements are because of bad human driving? The logic you are using to reach that conclusion is, er, interesting.


  • Closed Accounts Posts: 3,478 ✭✭✭eeguy


    cnocbui wrote: »
    Not what I said, you are putting words in my mouth.

    We have three fatalities in 1/20th of the Australian average driving distance per fatality, and you are making the argument that the Waymo disengagements are because of bad human driving? The logic you are using to reach that conclusion is, er, interesting.

    Where are your 3 fatalities that are 100% the fault of autonomous systems? I really think you're jumping the gun here.

    Why the Australian average? Because it's lower than the US average?

    You can see the exact reason for disengagements on their reports if you bothered to look before jumping to conclusions.


  • Registered Users Posts: 19,747 ✭✭✭✭cnocbui


    eeguy wrote: »
    Where are your 3 fatalities that are 100% the fault of autonomous systems? I really think you're jumping the gun here.


    Why the Australian average? Because it's lower than the US average?

    You can see the exact reason for disengagements on their reports if you bothered to look before jumping to conclusions.

    2 Teslas and one Uber.

    I have looked at the disengagement reasons and found them to be not exactly trivial - things like:

    Disengage for unwanted maneuver of the vehicle
    Disengage for hardware discrepancy
    Disengage for a software discrepancy
    Disengage for incorrect behavior prediction of other traffic participants
    Disengage for a perception discrepancy

    Interesting to note that in all of Waymo's testing, 'Disengage for a recklessly behaving road user' has only happened twice, so bad driving by other road users is possibly a bit overstated as the cause of Waymo's woes.

    Why Australia? Because they speak English, I'm familiar with the place, and their stats fall near the middle of the range for OECD countries, being 14th out of 32 (5.05 deaths per 100k population vs an OECD average of about 5.12). I'm just not that interested in being 100% politically correct and trying to root out the stats for Austria or France. If I had wanted to tilt the table I would be using Norwegian stats.


  • Closed Accounts Posts: 3,478 ✭✭✭eeguy


    1 Tesla was not on autopilot.
    1 Tesla is currently being investigated.
    1 Uber is currently being investigated.

    The disengagements show that the reasons are not as clear cut as you make out. There are plenty of reasons for the driver to take over, not only when the car is doing something it shouldn't.

    Seeing as the cars are being trialled on US roads it makes sense to use US stats.


  • Registered Users Posts: 19,747 ✭✭✭✭cnocbui


    eeguy wrote: »
    1 Tesla was not on autopilot.
    1 Tesla is currently being investigated.
    1 Uber is currently being investigated.

    The disengagements show that the reasons are not as clear cut as you make out. There are plenty of reasons for the driver to take over, not only when the car is doing something it shouldn't.

    Seeing as the cars are being trialled on US roads it makes sense to use US stats.

    If you want to use US stats, be my guest.

    Both fatal Tesla accidents occurred while Autopilot was engaged. Whoops, I missed the death in China, so make that two confirmed, one probable - likely four in all.

    You are kidding yourself if you think the two incidents being investigated aren't the fault of the autonomous systems.
    Autopilot Cited in Death of Chinese Tesla Driver
    ...
    In an earlier blog post, Tesla said, “We have never seen this level of damage to a Model X in any other crash.” The extreme damage done to the victim’s vehicle, Tesla said, was due to an earlier crash that crushed the concrete divider’s aluminum crash attenuator, thus rendering the safety feature useless. It provided a photo taken the day before the fatal March 23 crash, showing that the feature had not been repaired.

    After retrieving the vehicle’s digital logs, the company announced on Friday that the Model X was driving with its semi-autonomous Autopilot system engaged.
    ...
    The first known death caused by a self-driving car was disclosed by Tesla Motors on Thursday, a development that is sure to cause consumers to second-guess the trust they put in the booming autonomous vehicle industry.

    The 7 May accident occurred in Williston, Florida, after the driver, Joshua Brown, 40, of Ohio put his Model S into Tesla’s autopilot mode, which is able to control the car during highway driving.

    Against a bright spring sky, the car’s sensors system failed to distinguish a large white 18-wheel truck and trailer crossing the highway, Tesla said. The car attempted to drive full speed under the trailer, “with the bottom of the trailer impacting the windshield of the Model S”, Tesla said in a blogpost.

