
A.I.???

  • 20-09-2010 5:22pm
    #1
    Registered Users, Registered Users 2 Posts: 5,706 ✭✭✭


    Do you think self-aware artificial intelligence is possible? As in a supercomputer that will some day ask the question "What am I?"

    Will such a concept always be science fiction?

    If you do think it's possible, how far away are we from it?


Comments

  • Registered Users, Registered Users 2 Posts: 1,005 ✭✭✭Enkidu


    I would assume it's possible. If organic matter can be arranged into an intelligent form, it's possible that a Turing machine built from metal, etc. could be too.

    However, most AI researchers are extremely skeptical that we are remotely near that point, especially given the neuroplasticity of the brain that has become apparent in recent years.


  • Registered Users, Registered Users 2 Posts: 665 ✭✭✭Alt_Grrr


    If you do think it's possible, how far away are we from it?

    It's a long way off...
    We are barely beginning to understand the human brain, and what intelligence is in the first place...

    This area is slowly being explored by Neuroscientists and Computer Scientists.

    In the past, the mistake was trying to program a computer to be intelligent; now we are trying to mimic the way humans learn, in the hope that over time we can develop systems that adapt better to new situations.

    A computer with an independent method of thought, self-awareness, and the ability to question itself is a very long way away indeed...


  • Registered Users, Registered Users 2 Posts: 2,164 ✭✭✭cavedave


    We are barely beginning to understand the human brain, and what intelligence is in the first place...

    I think that might be a non sequitur. It seems to assume we need to understand human consciousness before we can create machine consciousness. But we did not need to understand thermodynamics before creating hot air balloons, or fully understand bird flight before creating planes. Engineers regularly create things, and then the scientists come in to explain them. The Industrial Revolution chapter in this book has many more examples.
    A computer with an independent method of thought, self-awareness, and the ability to question itself is a very long way away indeed...

    Even amoebae are self-aware to an extent. Using self-awareness as a yardstick makes AI keep moving into the distance. Am I as self-aware as a Buddhist monk, a physicist or a brilliant athlete? In none of the physical, mental or spiritual domains am I up to scratch. Hopefully I am still conscious, though.

    Things that were once thought to require intelligence now just need brute-force calculation; chess, for example. Pretty soon many more intelligent activities will become just calculation: driverless cars and planes, automatic traffic routing, stock trading. Automation can keep increasing as programs get better, without suddenly needing strong AI.


  • Closed Accounts Posts: 5,064 ✭✭✭Gurgle


    Depending on where you put the goalposts, AI has been done, will be done soon, will be done sometime or is impossible.

    Sentience is a different story; I'd answer "soonish" and make no attempt to be more specific.


  • Registered Users, Registered Users 2 Posts: 5,706 ✭✭✭fonecrusher1


    Gurgle wrote: »

    Sentience is a different story; I'd answer "soonish" and make no attempt to be more specific.

    I'm afraid that's a very vague and, if I may be honest, lazy answer, especially compared with the other replies. No offence.

    I mentioned self-awareness as a breakthrough point for real AI because is that not a question ("What am I?") that would be asked by something approaching human intelligence? I mean, the human brain is what computer scientists are trying to mimic, is it not? It's a benchmark.

    A supercomputer that boasts many, many calculations per second, with millions of preprogrammed hypothetical scenario solutions/reactions, is not AI and is never going to be a match for something created with a learning ability.


  • Registered Users, Registered Users 2 Posts: 665 ✭✭✭Alt_Grrr


    cavedave wrote: »
    I think that might be a non sequitur. It seems to assume we need to understand human consciousness before we can create machine consciousness. But we did not need to understand thermodynamics before creating hot air balloons, or fully understand bird flight before creating planes. Engineers regularly create things, and then the scientists come in to explain them. The Industrial Revolution chapter in this book has many more examples.

    Yet there are many things we need to understand before we can, say, build a bridge or even a hot air balloon.

    Happening upon an artificial intelligence by chance someday and then tracking back to understand it is unlikely to happen.

    cavedave wrote: »
    Even amoebae are self-aware to an extent. Using self-awareness as a yardstick makes AI keep moving into the distance. Am I as self-aware as a Buddhist monk, a physicist or a brilliant athlete? In none of the physical, mental or spiritual domains am I up to scratch. Hopefully I am still conscious, though.

    A computer is a machine; it lacks even the intelligence of a single-celled organism. The yardstick of self-awareness and the ability to question one's own existence is a very different one. You need to look to the philosophers.
    cavedave wrote: »
    Things that were once thought to require intelligence now just need brute-force calculation; chess, for example. Pretty soon many more intelligent activities will become just calculation: driverless cars and planes, automatic traffic routing, stock trading. Automation can keep increasing as programs get better, without suddenly needing strong AI.

    First off, chess is not done by computers by brute force; it never has been and never will be. It's machine learning: the computer can learn and optimise its understanding the more it plays.

    It does possess the ability to look many moves ahead (Deep Blue typically computes about 8 moves ahead), but not every possible combination of moves. (It may be only 64 squares and 32 pieces, but there are still a lot of possible moves; to compute them all would take longer than the lifetime of the universe.)
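    To get a feel for why limited lookahead is feasible but exhausting the game is not, here is a rough back-of-the-envelope sketch (the branching factor of ~35 legal moves per position and the ~10^120 game-tree size are common estimates, not exact figures):

```python
# Rough combinatorics of chess lookahead (all figures are common estimates).
BRANCHING = 35   # average legal moves per position
PLIES = 8        # roughly Deep Blue's typical search depth

# Positions a fixed-depth search would visit with no pruning at all:
positions = BRANCHING ** PLIES
print(f"{positions:.2e} positions at depth {PLIES}")   # ~2.25e12: tractable

# Shannon's estimate of the full game tree is ~10^120 positions.
# Even at a billion positions per second, exhausting it would take
# ~10^111 seconds, vastly longer than the age of the universe (~4e17 s).
full_tree_seconds = 10 ** 120 / 1e9
print(f"{full_tree_seconds:.1e} seconds to exhaust the game tree")
```

    In practice, alpha-beta pruning and evaluation heuristics cut that depth-8 count down by orders of magnitude, which is what makes tournament-strength search possible.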

    Again, all those examples are treated as machine learning problems (with the exception of stock trading, which is seen as a chaotic system and will need stronger intelligence going forward, based purely on the automated near-crash of the market that happened this year).

    Machine learning is a form of automation, but it improves its outputs by minute mathematical adjustments.
    Pure automation will not solve all the problems, though; many machine learning algorithms require some manual intervention to help them get better results.
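    As a concrete illustration of those "minute mathematical adjustments", here is a hypothetical toy example (not any particular system's algorithm): a single-weight model learning y = 2x by gradient descent, where each update nudges the weight a small step in the direction that reduces the error.

```python
# Toy gradient descent: learn w so that w*x approximates y (true w is 2).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0          # initial guess
lr = 0.1         # learning rate: keeps each adjustment minute

for _ in range(50):                  # 50 passes over the data
    for x, y in data:
        error = w * x - y            # how far off the prediction is
        w -= lr * 2 * error * x      # derivative of squared error w.r.t. w

print(round(w, 4))  # -> 2.0
```

    Each individual update barely moves the weight, but repeated over many examples the model converges; that accumulation of tiny corrections is the core of most machine learning.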
    Gurgle wrote: »
    Depending on where you put the goalposts, AI has been done, will be done soon, will be done sometime or is impossible.

    Sentience is a different story; I'd answer "soonish" and make no attempt to be more specific.

    I'm quite sure that sentience and the sci-fi version of AI will not happen in our lifetimes.


  • Registered Users, Registered Users 2 Posts: 2,164 ✭✭✭cavedave


    Alt_Grrr

    Happening upon an artificial intelligence by chance someday and then tracking back to understand it is unlikely to happen.

    I would say it is quite likely to happen. Wikipedia in many ways knows more than I do. Combine some only slightly better LbR algorithms than we currently have with the vast quantities of data on the web (particularly the semantic web), and you could have an intelligent system. Not intelligent in the way humans are; a vastly different intelligence. But that's a bit like giving out to submarines for not being able to swim.
    The yardstick of self-awareness and the ability to question one's own existence is a very different one. You need to look to the philosophers.
    Quite possibly. But even here, Turing test programs are getting better all the time. When they do pass the test, I assume we will say the test is meaningless, as these programs do not really question their own existence.

    Many humans lead very different internal lives to what is generally considered normal. Take autistic people: I read a book by Temple Grandin and it left me with profound doubts as to what normal thinking amounts to.
    First off, chess is not done by computers by brute force; it never has been and never will be.
    True, sorry. Search algorithms and heuristics, combined with brute-force computation improvements, are what advanced chess computing.


  • Registered Users, Registered Users 2 Posts: 1,407 ✭✭✭Promac


    You don't have to understand how a brain works in order to create something we'd call "intelligent". Some people are trying to model the brain, but that's only one way of going about it. I'd imagine it will come about by accident - I reckon it'll be more of an emergent thing than a design thing.

    Making software that is intelligent is fairly easy - the difficult part is making it curious, giving it the ability to be bored, giving it ambition and motivation - creativity and the ability to imagine. We can make computers that win at chess but can we make computers that want to win at chess? What reason would it have to want that? What motivation?

    Check out the Singularity for related ideas on when we can expect "AI" to show up and save the world for us.


  • Registered Users, Registered Users 2 Posts: 15,443 ✭✭✭✭bonkey


    I'm afraid that's a very vague and, if I may be honest, lazy answer, especially compared with the other replies. No offence.

    Depends on how you look at it.

    Sustainable Nuclear Fusion has been about 30 years away for the last 30 years.

    Some wags will say that this is likely to remain the case for at least another 30 years.

    It's impossible to predict when a breakthrough will be made. We can perhaps set out the preconditions we think will need to be met... but timescales are a different matter.

    Soonish, in that regard, is a pretty honest answer.
    A supercomputer that boasts many, many calculations per second, with millions of preprogrammed hypothetical scenario solutions/reactions, is not AI and is never going to be a match for something created with a learning ability.

    Actually, that very much depends on the problem-space it's being applied to, as well as a number of other factors... but it isn't really relevant to the question at hand.


  • Closed Accounts Posts: 5,064 ✭✭✭Gurgle


    A supercomputer that boasts many, many calculations per second, with millions of preprogrammed hypothetical scenario solutions/reactions, is not AI and is never going to be a match for something created with a learning ability.
    bonkey wrote: »
    Actually, that very much depends on the problem-space it's being applied to, as well as a number of other factors... but it isn't really relevant to the question at hand.

    Just applying some numbers here for a rough-and-ready prediction of when AI will become possible.

    The human brain consists of ~100 billion neurons, each of which takes ~1 ms to react to a change in its inputs. Assuming processing power multiplies linearly (i.e. two processors are twice as good as one), that's the equivalent of one 100 THz processor (or ten 10 THz processors).

    What's current?
    Forget military / stock exchange / LHC-type supercomputers, and assume the level available to the average home builder with a decent paycheck:
    e.g. an 8-core 3 GHz machine, 24 GHz combined.

    Drop the numbers into Moore's law and we get a human-brain-equivalent processor in 2028.

    That's fairly soonish.
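    For what it's worth, that arithmetic checks out using the post's own figures, assuming an 18-month Moore's-law doubling period (one common reading of the law):

```python
import math

brain_hz = 100e9 * 1000    # 100 billion neurons x 1 kHz each = 100 THz
current_hz = 8 * 3e9       # 8 cores at 3 GHz = 24 GHz combined (2010 level)

doublings = math.log2(brain_hz / current_hz)   # ~12 doublings needed
years = doublings * 1.5                        # 18 months per doubling
print(2010 + round(years))                     # -> 2028
```

    Of course this assumes the raw clock-rate equivalence is the right measure, which the brain's massive parallelism (discussed below in the thread) calls into question.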


  • Moderators, Recreation & Hobbies Moderators, Science, Health & Environment Moderators, Technology & Internet Moderators Posts: 93,567 Mod ✭✭✭✭Capt'n Midnight


    Gurgle wrote: »
    The human brain consists of ~100 billion neurons, each of which takes ~1 ms to react to a change in its inputs. Assuming processing power multiplies linearly (i.e. two processors are twice as good as one), that's the equivalent of one 100 THz processor (or ten 10 THz processors).
    The brain works in parallel, so a botnet should already have the processing power ;)

    But while the outputs from a neuron are digital, the inputs aren't. You have to sum up thousands of inputs to decide whether a neuron will fire or not, so a tad more processing power would be needed to simulate the 10^14 synapses.
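    A toy "leaky integrate-and-fire" sketch illustrates that analog-in, digital-out behaviour: weighted inputs accumulate on a decaying membrane potential, and a spike is emitted only when the sum crosses a threshold (weights, leak and threshold here are illustrative values, not biological ones):

```python
# Toy leaky integrate-and-fire neuron: analog inputs, digital (spike) output.
def lif_step(v, inputs, weights, leak=0.9, threshold=1.0):
    """Advance the membrane potential v by one time step."""
    v = v * leak + sum(w * i for w, i in zip(weights, inputs))
    if v >= threshold:
        return 0.0, 1    # potential resets, neuron fires a spike
    return v, 0          # below threshold: no spike

v = 0.0
weights = [0.3, 0.5, 0.4]
v, spike = lif_step(v, [1, 1, 0], weights)   # v = 0.8: sub-threshold, silent
v, spike = lif_step(v, [1, 1, 0], weights)   # 0.8*0.9 + 0.8 = 1.52: fires
print(spike)  # -> 1
```

    A real neuron sums thousands of such inputs rather than three, which is exactly why the synapse count, not the neuron count, dominates the processing estimate.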

    But you can make solid state analogs of nerve cells - http://www.rsc.org/chemistryworld/News/2010/March/04031001.asp


    The real AI problem is that we don't know how the brain works


  • Registered Users, Registered Users 2 Posts: 5,706 ✭✭✭fonecrusher1



    But you can make solid state analogs of nerve cells - http://www.rsc.org/chemistryworld/News/2010/March/04031001.asp


    The real AI problem is that we don't know how the brain works

    My brain hurts from reading all that techno-jargon!!! Jesus. What exactly are they on about? It's a new type of artificial memory that develops like animal neurons, is it?

