
The Singularity

  • 02-05-2011 12:34am
    #1
    Registered Users Posts: 2,734 ✭✭✭


    Do you believe this is inevitable, or is it science-fiction speculation?

    Singularity being a future time when technological and scientific change is so fast that, from our present perspective, we cannot even imagine what will happen, and when humanity will become posthumanity



Comments

  • Registered Users Posts: 26,578 ✭✭✭✭Turtwig


    I'd say it's more likely we will have reverted back to some kind of dark ages (or worse!) than any sort of era where we become technologically adept.


  • Moderators, Society & Culture Moderators Posts: 25,558 Mod ✭✭✭✭Dades


    Not sure what this has to do with A&A... but we do like our SF here. :pac:

    I think it's a possibility, rather than an inevitability.


  • Registered Users Posts: 4,718 ✭✭✭The Mad Hatter


    I don't know, but you'd better make your next post really good.


  • Registered Users Posts: 26,578 ✭✭✭✭Turtwig


    I don't know, but you'd better make your next post really good.

    allison-stokke.jpg

    Happy now?


  • Closed Accounts Posts: 17,485 ✭✭✭✭Ickle Magoo


    Was TMH's post not in relation to the OP's next post being their thousandth... :confused:


  • Registered Users Posts: 2,734 ✭✭✭sxt


    I would say this is the only forum open to this kind of question.

    I watched a documentary on Ray Kurzweil; he believes that "Artificial intelligence will wake up the universe".

    At the end of the documentary he was asked, "Does God exist?" He answered:

    "Well, I would say not yet..."


  • Closed Accounts Posts: 16,707 ✭✭✭✭Tigger


    Malty_T wrote: »
    allison-stokke.jpg

    Happy now?
    f - you mona lisa


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    sxt wrote: »
    Do you believe this is inevitable, or is it science-fiction speculation?

    Singularity being a future time when technological and scientific change is so fast that, from our present perspective, we cannot even imagine what will happen, and when humanity will become posthumanity

    I think it's actually more the point when we create intelligent artificial life that can create more intelligent versions of themselves, and we become subservient to them.


  • Registered Users Posts: 2,534 ✭✭✭Soul Winner


    Malty_T wrote: »
    allison-stokke.jpg

    Happy now?

    Stay on topic please Malty, there's no need for that, but thanks anyway... ;)


  • Registered Users Posts: 3,809 ✭✭✭CerebralCortex


    Wicknight wrote: »
    I think it's actually more the point when we create intelligent artificial life that can create more intelligent versions of themselves, and we become subservient to them.

    A possible scenario, but at the same time pessimistic. Let's not forget we're machines, with the potential for intelligence augmentation.


  • Registered Users Posts: 12,968 ✭✭✭✭bnt


    I don't know that it's inevitable, but I do know there are people working to make it happen. If you look at the work of Stephen Wolfram (the guy behind Mathematica and Wolfram Alpha), for example, there's definitely the ambition.

    With computers you inevitably have the "garbage in, garbage out" problem: your solution is only as good as your inputs. So I think that part of the "Singularity" process will involve computers taking control of their inputs, gathering information to an agenda they set - rather than relying on drip-fed info from "wetware". :cool:

    From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, ‘Look at that, you son of a bitch’.

    — Edgar Mitchell, Apollo 14 Astronaut



  • Moderators, Category Moderators, Politics Moderators, Recreation & Hobbies Moderators, Society & Culture Moderators Posts: 81,309 CMod ✭✭✭✭coffee_cake


    Could they not have called it something a bit more original? Next thing you know, people won't know what singularity means.


  • Moderators, Music Moderators Posts: 25,868 Mod ✭✭✭✭Doctor DooM


    A nice explanation of the Singularity thing.


  • Registered Users Posts: 2,164 ✭✭✭cavedave


    Many singularities have already happened. To take the most recent: humans develop farming, trade and specialisation about 5000 years ago, and economic growth rates go from nothing to 0.01% a year.

    Around 1800 the industrial revolution moves to using fossil fuels for power, and increases trade and specialisation. Economic growth goes up to 3% a year.

    The next singularity, if it brought a jump of the same size in economic growth, would see a doubling in the size of the economy every two weeks. Hanson lays out the history here.

    This is the great filter theory. A useful way to question whether we will pass through the next singularity is to wonder how difficult it was for us to pass through the last few.

    The Chinese, the Greeks and many others nearly had the industrial revolution; the Romans in particular seem to have been really close. We look posthuman to preindustrial people. We live about twice as long. We don't expect our siblings and children to die as children. We don't expect birth to kill us. We regard slavery as abhorrent. We work 40 hours a week sitting down. We have internet fights with people on the other side of the world...
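
The growth-rate arithmetic in this post is easy to sanity-check. A quick sketch in Python (the rates are the rough figures quoted above, purely illustrative, not precise economic data):

```python
import math

def doubling_time(annual_rate):
    """Years for the economy to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

# Rough figures from the post, purely illustrative
print(f"Farming era (0.01%/yr): doubles every {doubling_time(0.0001):,.0f} years")
print(f"Industrial era (3%/yr): doubles every {doubling_time(0.03):.0f} years")

# Doubling every two weeks is 26 doublings a year: an annual growth
# factor of 2**26, dwarfing the jump from 0.01% to 3%.
print(f"Two-week doubling: grows {2**26:,}x per year")
```

At 0.01% a year the economy takes roughly seven millennia to double; at 3% it takes about 23 years, which is why these transitions look like discontinuities in hindsight.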


  • Closed Accounts Posts: 1,780 ✭✭✭liamw


    Allison Stokke = posthuman


  • Closed Accounts Posts: 27,857 ✭✭✭✭Dave!


    PZ Myers has no time for Ray Kurzweil, and has written some rather vitriolic stuff about him on Pharyngula. Made me realise what a cúnt PZ is, and I stopped following his blog from then on.

    I've never watched or read anything about Kurzweil, so can't really comment on it.


  • Registered Users Posts: 677 ✭✭✭Doc_Savage


    To the OP: read "Ilium" by Dan Simmons.

    You'd love it! :D


  • Registered Users Posts: 3,809 ✭✭✭CerebralCortex


    Dave! wrote: »
    PZ Myers has no time for Ray Kurzweil, and has written some rather vitriolic stuff about him on Pharyngula. Made me realise what a cúnt PZ is, and I stopped following his blog from then on.

    I've never watched or read anything about Kurzweil, so can't really comment on it.

    Interesting you should say that, Dave!. I'm no Kurzweil fanboy, and I think he does hurt his image with his approach, but I think what he talks about is fairly sane and reasonable, and yes, PZ's commentary was way out of line.


  • Registered Users Posts: 445 ✭✭yammycat


    There's a new iPad coming out ...


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    CerebralCortex wrote: »
    A possible scenario, but at the same time pessimistic. Let's not forget we're machines, with the potential for intelligence augmentation.

    Oh I agree. I work in computer science; the way mainstream media portrays this AI takeover (Terminator, The Matrix) is quite ridiculous. Computers are machines, and machines only do what we tell them. The idea that they would desire stuff independently of us telling them to desire it does not hold; it shows a lack of understanding of why we desire stuff (i.e. evolution).


  • Closed Accounts Posts: 1,780 ✭✭✭liamw


    Wicknight wrote: »
    Oh I agree. I work in computer science; the way mainstream media portrays this AI takeover (Terminator, The Matrix) is quite ridiculous. Computers are machines, and machines only do what we tell them. The idea that they would desire stuff independently of us telling them to desire it does not hold; it shows a lack of understanding of why we desire stuff (i.e. evolution).

    Do you not think we can view the human brain as a state machine (albeit an extremely complex one)?

    If we did capture the brain as a state machine, then while the state transitions would be deterministic, they would appear to be non-deterministic.

    So while I think the portrayal by mainstream media may be a bit ridiculous, theoretically is it not possible that a complex state machine could be programmed in the future (biochemically or electronically) that behaves similar to a human brain?
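
A toy sketch in Python of what "deterministic but apparently non-deterministic" means (purely illustrative; nothing here models a real brain, and the transition rule is an arbitrary made-up function):

```python
def step(state, inp):
    """Deterministic transition: the same (state, input) pair
    always produces the same next state."""
    return (state * state + inp) % 101

def run(inputs, state=1):
    """Feed a sequence of inputs through the machine, return the end state."""
    for inp in inputs:
        state = step(state, inp)
    return state

# Identical input histories always give identical behaviour...
assert run([1, 2, 3]) == run([1, 2, 3])

# ...but a small change early in the input history gives a very
# different end state, so from the outside, without knowing the rule
# and the full history, the machine looks unpredictable.
print(run([1, 2, 3]), run([2, 2, 3]))  # → 39 23
```

An observer who sees only the outputs has no practical way to tell this apart from randomness, even though every transition is fully determined.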


  • Moderators, Category Moderators, Politics Moderators, Recreation & Hobbies Moderators, Society & Culture Moderators Posts: 81,309 CMod ✭✭✭✭coffee_cake


    Wicknight wrote: »
    Oh I agree. I work in computer science; the way mainstream media portrays this AI takeover (Terminator, The Matrix) is quite ridiculous. Computers are machines, and machines only do what we tell them. The idea that they would desire stuff independently of us telling them to desire it does not hold; it shows a lack of understanding of why we desire stuff (i.e. evolution).

    Some fairly open-ended commands could lead to chaos, though


  • Registered Users Posts: 26,578 ✭✭✭✭Turtwig


    1/0.


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    liamw wrote: »
    Do you not think we can view the human brain as a state machine (albeit an extremely complex one)?

    If we did capture the brain as a state machine, while the state transitions are deterministic they appear to be non-deterministic.

    So while I think the portrayal by mainstream media may be a bit ridiculous, theoretically is it not possible that a complex state machine could be programmed in the future (biochemically or electronically) that behaves similar to a human brain?

    Sure, but the machine won't do things we haven't built it to do.

    Hollywood tend to ignore this, assuming that something "alive" will do all the things animals do, such as fight for survival, protect itself, view itself as self-important, etc.

    We do those things because we have evolved to do them. An AI won't do those things unless we decide it should do those things. Take Terminator. The argument is that as soon as Skynet became self-aware it tried to protect itself from being shut off. Why? We would do that because we have a natural survival instinct that we have evolved. But Skynet won't have that unless the programmers decided to give it to him.

    In some ways this relates back to religious thinking, theory of mind stuff, viewing agency in nature. We assume that if Skynet is alive in a basic sense then it will have properties we associate with such a being. We don't need to have it explained to us why Skynet defended himself; we naturally assume he would, because we naturally assume living things do this.

    In reality he wouldn't unless we wrote him to. A bit like the animals in Hitchhiker's Guide who have been engineered to want to be eaten. :P


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    bluewolf wrote: »
    Some fairly open-ended commands could lead to chaos, though

    Scratchy = Kill
    Humans = Don't Kill

    Should be simple enough.


  • Registered Users Posts: 2,164 ✭✭✭cavedave


    I am suspicious of people who think they can predict how the next singularity will happen.

    Adam Smith (a very smart guy) got James Watt the job that caused the last singularity and Smith did not realise it was happening. If someone that close to it didn't see the last one I doubt anyone will see this one.


  • Moderators, Society & Culture Moderators Posts: 25,558 Mod ✭✭✭✭Dades


    Wicknight wrote: »
    We do those things because we have evolved to do them. An AI won't do those things unless we decide it should do those things. Take Terminator. The argument is that as soon as Skynet became self-aware it tried to protect itself from being shut off. Why? We would do that because we have a natural survival instinct that we have evolved. But Skynet won't have that unless the programmers decided to give it to him.
    Yeah, but who's to say that in their desire to create a more "realistic" AI, someone isn't going to start recklessly programming such characteristics? Why assume it's never going to happen just because Hollywood have seized on the notion? :pac:


  • Closed Accounts Posts: 1,780 ✭✭✭liamw


    Wicknight wrote: »
    Sure, but the machine won't do things we haven't built it to do.

    Well yes, a machine is programmed with a defined set of instructions/states.

    My point was that if you built the machine with sensory inputs, states and memory analogous to the human brain, then theoretically I don't see why it couldn't behave like a human.

    The machine is indeed adhering to the instructions that the programmer intended, but the states and I/Os are so complex that the behaviour is chaotic and practically unpredictable.

    Also, if you designed a bot that could reproduce itself with minor variations in its build instructions (akin to mutations in genetic code), and added a selection pressure and a fitness function, then couldn't that machine 'evolve' just like humans do biologically?

    FYI: I would consider the human brain a deterministic machine, but would be interested in a discussion on this.
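
The self-reproducing bot idea can be sketched as a textbook genetic algorithm (a toy in Python; the bitstring "genome" and the count-the-ones fitness function are illustrative stand-ins, not anyone's actual design). Copies with random variation, plus a selection pressure, are enough for fitness to climb:

```python
import random

random.seed(42)

GENOME_LEN = 20

def fitness(genome):
    """Selection pressure: more 1-bits = fitter (a stand-in for any goal)."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Copy with small random variations, akin to mutations in genetic code."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(generations=200, pop_size=50):
    # Start from a random population of bitstring genomes
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill with mutated copies of survivors
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(fitness(g) for g in population)

print(evolve())  # fitness climbs toward the maximum of GENOME_LEN
```

No genome is ever told what to do beyond "vary and be selected", which is the point of the post: the designer specifies the mechanism, not the outcome.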


  • Registered Users Posts: 3,247 ✭✭✭stevejazzx


    Wicknight wrote: »
    Sure, but the machine won't do things we haven't built it to do.

    Depends, surely, on the way it's programmed? In the future, perhaps programming will be able to give a machine such open-endedness that it essentially ends up doing things it was never programmed for?

    Hollywood tend to ignore this, assuming that something "alive" will do all the things animals do, such as fight for survival, protect itself, view itself as self-important, etc.

    That's true, but take HAL for example: it was protecting the mission by trying to kill Dave, so harming humans as a means of allowing something else to survive is a possibility.
    We do those things because we have evolved to do them. An AI won't do those things unless we decide it should do those things. Take Terminator. The argument is that as soon as Skynet became self-aware it tried to protect itself from being shut off. Why? We would do that because we have a natural survival instinct that we have evolved. But Skynet won't have that unless the programmers decided to give it to him.

    In the case of Skynet and similar AI, sentience brings a new, mysterious quality of self-preservation. However, this mysterious quality may be explained by underlying code. Who's to say that humans didn't start off the same?



    In some ways this relates back to religious thinking, theory of mind stuff, viewing agency in nature. We assume that if Skynet is alive in a basic sense then it will have properties we associate with such a being. We don't need to have it explained to us why Skynet defended himself; we naturally assume he would, because we naturally assume living things do this.


    No, I don't think this is the thinking at all; I think the idea is that sentience brings a sophistication to the underlying code that enhances its understanding of what it is to be alive. Very sci-fi, I grant you, but not beyond the realms of possibility considering our origins, from stardust to now.


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    Dades wrote: »
    Yeah, but who's to say that in their desire to create a more "realistic" AI, someone isn't going to start recklessly programming such characteristics?

    Nothing, but it won't come as a surprise. No one is going to be like, "Oh, I had no idea that the AI program we put in charge of the American nuclear arsenal, and that we programmed to do anything it can to survive, was going to do anything it can to survive." :P

