
The Singularity

  • 01-05-2011 11:34pm
    #1
    Registered Users, Registered Users 2 Posts: 2,759 ✭✭✭sxt


    Do you believe this is inevitable or is it science fiction speculation?

    Singularity being a future time when technological and scientific change is so fast that we cannot even imagine, from our present perspective, what will happen, and when humanity will become posthumanity



Comments

  • Registered Users, Registered Users 2 Posts: 26,578 ✭✭✭✭Turtwig


    I'd say it's more likely we will have reverted back to some kind of dark ages (or worse!) than any sort of era where we become technologically adept.


  • Moderators, Society & Culture Moderators Posts: 25,558 Mod ✭✭✭✭Dades


    Not sure what this has to do with A&A... but we do like our SF here. :pac:

    I think it's a possibility, rather than an inevitability.


  • Registered Users, Registered Users 2 Posts: 4,718 ✭✭✭The Mad Hatter


    I don't know, but you'd better make your next post really good.


  • Registered Users, Registered Users 2 Posts: 26,578 ✭✭✭✭Turtwig


    I don't know, but you'd better make your next post really good.

    allison-stokke.jpg

    Happy now?


  • Closed Accounts Posts: 17,485 ✭✭✭✭Ickle Magoo


    Was TMH's post not in relation to the OP's next post being their thousandth... :confused:


  • Registered Users, Registered Users 2 Posts: 2,759 ✭✭✭sxt


    I would say this is the only forum open to this kind of question.

    I watched a documentary on Ray Kurzweil; he believes that "Artificial intelligence will wake up the universe"

    At the end of the documentary he was asked, "Does God exist?" He answered

    "Well, I would say not yet.."


  • Closed Accounts Posts: 16,705 ✭✭✭✭Tigger


    Malty_T wrote: »
    allison-stokke.jpg

    Happy now?
    f - you mona lisa


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    sxt wrote: »
    Do you believe this is inevitable or is it science fiction speculation?

    Singularity being a future time when technological and scientific change is so fast that we cannot even imagine, from our present perspective, what will happen, and when humanity will become posthumanity

    I think it's actually more the point when we create intelligent artificial life that can create more intelligent versions of themselves and we become subservient to them.


  • Registered Users, Registered Users 2 Posts: 2,534 ✭✭✭Soul Winner


    Malty_T wrote: »
    allison-stokke.jpg

    Happy now?

    Stay on topic please Malty, there's no need for that, but thanks anyway... ;)


  • Registered Users, Registered Users 2 Posts: 3,809 ✭✭✭CerebralCortex


    Wicknight wrote: »
    I think it's actually more the point when we create intelligent artificial life that can create more intelligent versions of themselves and we become subservient to them.

    A possible scenario, but at the same time pessimistic. Let's not forget we're machines, with the potential for intelligence augmentation.


  • Registered Users, Registered Users 2 Posts: 13,077 ✭✭✭✭bnt


    I don't know that it's inevitable, but I do know there are people working to make it happen. If you look at the work of Stephen Wolfram (the guy behind Mathematica and Wolfram Alpha), for example, there's definitely the ambition.

    With computers you inevitably have the "garbage in, garbage out" problem: your solution is only as good as your inputs. So I think that part of the "Singularity" process will involve computers taking control of their inputs, gathering information to an agenda they set - rather than relying on drip-fed info from "wetware". :cool:

    You are the type of what the age is searching for, and what it is afraid it has found. I am so glad that you have never done anything, never carved a statue, or painted a picture, or produced anything outside of yourself! Life has been your art. You have set yourself to music. Your days are your sonnets.

    ―Oscar Wilde predicting Social Media, in The Picture of Dorian Gray



  • Posts: 0 CMod ✭✭✭✭ Siena Rapid Test


    Could they not have called it something a bit more original? Next thing you know people won't know what singularity means


  • Moderators, Music Moderators Posts: 25,872 Mod ✭✭✭✭Doctor DooM





    A nice explanation of the Singularity thing.


  • Registered Users, Registered Users 2 Posts: 2,164 ✭✭✭cavedave


    Many singularities have already happened. To take the most recent: humans developed farming, trade and specialisation about 10,000 years ago, and economic growth rates went from nothing to 0.01% a year.

    Around 1800 the industrial revolution moved to using fossil fuels for power, and increased trade and specialisation again. Economic growth went up to 3% a year.

    The next singularity, if it was the same size, would see economic growth doubling the size of the economy every 2 weeks. Hanson lays out the history here.

    This is the great filter theory. A useful way to question whether we will pass through the next singularity is to wonder how difficult it was for us to pass through the last few.

    The Chinese, the Greeks and many others nearly had the industrial revolution. The Romans in particular seem to have been really close. We look posthuman to preindustrial people. We live about twice as long. We don't expect our siblings and children to die as children. We don't expect birth to kill us. We regard slavery as abhorrent. We work 40 hours a week sitting down. We have internet fights with people on the other side of the world...
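    Those growth rates convert directly into doubling times; a quick back-of-the-envelope check in Python (the 0.01% and 3% figures are the post's round numbers, not precise historical data):

```python
import math

def doubling_time(annual_growth_rate):
    """Years for the economy to double at a constant growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# ~0.01% a year after farming: roughly a 7,000-year doubling time
print(round(doubling_time(0.0001)))  # 6932
# ~3% a year after the industrial revolution: roughly 23 years
print(round(doubling_time(0.03)))    # 23
```

    Which is why a jump to doubling every two weeks would be a change of the same scale again.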


  • Closed Accounts Posts: 1,780 ✭✭✭liamw


    Allison Stokke = posthuman


  • Closed Accounts Posts: 27,857 ✭✭✭✭Dave!


    PZ Myers has no time for Ray Kurzweil, and has written some rather vitriolic stuff about him on Pharyngula. Made me realise what a cúnt PZ is, and I stopped following his blog from then.

    I've never watched or read anything about Kurzweil, so can't really comment on it.


  • Registered Users, Registered Users 2 Posts: 677 ✭✭✭Doc_Savage


    to the OP;
    Read "Ilium" by Dan Simmons.

    you'd love it!:D


  • Registered Users, Registered Users 2 Posts: 3,809 ✭✭✭CerebralCortex


    Dave! wrote: »
    PZ Myers has no time for Ray Kurzweil, and has written some rather vitriolic stuff about him on Pharyngula. Made me realise what a cúnt PZ is, and I stopped following his blog from then.

    I've never watched or read anything about Kurzweil, so can't really comment on it.

    Interesting you should say that Dave!. I'm no Kurzweil fanboy and I think he does hurt his image in his approach, but I think what he talks about is fairly sane and reasonable and yes PZ's commentary was way out of line.


  • Registered Users, Registered Users 2 Posts: 445 ✭✭yammycat


    There's a new iPad coming out ...


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    A possible scenario, but at the same time pessimistic. Let's not forget we're machines, with the potential for intelligence augmentation.

    Oh I agree. I work in computer science; the way mainstream media portrays this AI takeover (Terminator, The Matrix) is quite ridiculous. Computers are machines, and machines only do what we tell them. The idea that they would desire stuff independently of us telling them to desire it does not hold; it shows a lack of understanding of why we desire stuff (i.e. evolution)


  • Closed Accounts Posts: 1,780 ✭✭✭liamw


    Wicknight wrote: »
    Oh I agree. I work in computer science; the way mainstream media portrays this AI takeover (Terminator, The Matrix) is quite ridiculous. Computers are machines, and machines only do what we tell them. The idea that they would desire stuff independently of us telling them to desire it does not hold; it shows a lack of understanding of why we desire stuff (i.e. evolution)

    Do you not think we can view the human brain as a state machine (albeit an extremely complex one)?

    If we did capture the brain as a state machine, the state transitions would be deterministic even though they appear non-deterministic.

    So while I think the portrayal by mainstream media may be a bit ridiculous, theoretically is it not possible that a complex state machine could be programmed in the future (biochemically or electronically) that behaves similarly to a human brain?
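    The deterministic-but-seemingly-random point can be sketched with a toy state machine; the transition rule below is arbitrary and purely illustrative, not a model of neurons:

```python
def step(state, stimulus):
    """Deterministic transition: the same (state, stimulus) pair always
    yields the same result. The constants are arbitrary."""
    new_state = (state * 31 + stimulus) % 997  # hidden internal update
    output = new_state % 2                     # all an observer can see
    return new_state, output

def run(inputs, start=1):
    state, outputs = start, []
    for stimulus in inputs:
        state, out = step(state, stimulus)
        outputs.append(out)
    return outputs

# Outwardly the 0/1 stream looks patternless, but replaying the same
# inputs reproduces it exactly: deterministic, yet hard to predict
# without knowing the hidden state.
print(run([3, 7, 7, 3, 7]))  # [0, 0, 0, 1, 0]
assert run([3, 7, 7, 3, 7]) == run([3, 7, 7, 3, 7])
```

    Scale the hidden state up by billions and the "apparent non-determinism" liamw describes falls out for free.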


  • Posts: 0 CMod ✭✭✭✭ Siena Rapid Test


    Wicknight wrote: »
    Oh I agree. I work in computer science; the way mainstream media portrays this AI takeover (Terminator, The Matrix) is quite ridiculous. Computers are machines, and machines only do what we tell them. The idea that they would desire stuff independently of us telling them to desire it does not hold; it shows a lack of understanding of why we desire stuff (i.e. evolution)

    Some fairly open ended commands could lead to chaos though


  • Registered Users, Registered Users 2 Posts: 26,578 ✭✭✭✭Turtwig


    1/0.


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    liamw wrote: »
    Do you not think we can view the human brain as a state machine (albeit an extremely complex one)?

    If we did capture the brain as a state machine, while the state transitions are deterministic they appear to be non-deterministic.

    So while I think the portrayal by mainstream media may be a bit ridiculous, theoretically is it not possible that a complex state machine could be programmed in the future (biochemically or electronically) that behaves similar to a human brain?

    Sure, but the machine won't do things we haven't built it to do.

    Hollywood tend to ignore this, assuming that something "alive" will do all the things animals do, such as fight for survival, protect itself, view itself as self important etc.

    We do those things because we have evolved to do them. An AI won't do those things unless we decide it should do those things. Take Terminator. The argument is that as soon as Skynet became self aware it tried to protect itself from being shut off. Why? We would do that because we have a natural survival instinct that we have evolved. But Skynet won't have that unless the programmers decided to give it to him.

    In some ways this relates back to religious thinking, theory of the mind stuff, viewing agency in nature. We assume that if Skynet is alive in a basic sense then it will have properties we associate with such a being. We don't need to have it explained to us why Skynet defended himself, we naturally assume he would because we naturally assume living things do this.

    In reality he wouldn't unless we wrote him to. Bit like the animals in Hitchhikers Guide who have been engineered to want to be eaten. :P


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    bluewolf wrote: »
    Some fairly open ended commands could lead to chaos though

    Scratchy = Kill
    Humans = Don't Kill

    Should be simple enough.


  • Registered Users, Registered Users 2 Posts: 2,164 ✭✭✭cavedave


    I am suspicious of people who think they can predict how the next singularity will happen.

    Adam Smith (a very smart guy) got James Watt the job that caused the last singularity, and Smith did not realise it was happening. If someone that close to it didn't see the last one, I doubt anyone will see this one.


  • Moderators, Society & Culture Moderators Posts: 25,558 Mod ✭✭✭✭Dades


    Wicknight wrote: »
    We do those things because we have evolved to do them. An AI won't do those things unless we decide it should do those things. Take Terminator. The argument is that as soon as Skynet became self aware it tried to protect itself from being shut off. Why? We would do that because we have a natural survival instinct that we have evolved. But Skynet won't have that unless the programmers decided to give it to him.
    Yeah, but who's to say that in their desire to create a more "realistic" AI, someone isn't going to start recklessly programming such characteristics? Why assume, just because Hollywood have seized on the notion, that it's never going to happen? :pac:


  • Closed Accounts Posts: 1,780 ✭✭✭liamw


    Wicknight wrote: »
    Sure, but the machine won't do things we haven't built it to do.

    Well yes, a machine is programmed with a defined set of instructions/states.

    My point was that if you built the machine with sensory inputs, states and memory analogous to the human brain, then theoretically I don't see why it couldn't behave like a human.

    The machine is indeed adhering to the instructions that the programmer intended, but the states and I/Os are so complex the behaviour is chaotic and practically unpredictable.

    Also, if you designed a bot that could reproduce itself with minor variations in the build instructions (akin to mutations in genetic code), and added a selection pressure and a fitness function, then couldn't that machine 'evolve' just like humans do biologically?

    FYI: I would consider the human brain a deterministic machine, but would be interested in a discussion on this.
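    The reproduce-with-variation idea is, in miniature, a genetic algorithm. A minimal sketch under toy assumptions (the 10-bit "genome" and fixed fitness target stand in for real build instructions and selection pressure):

```python
import random

random.seed(42)  # fixed seed so the illustrative run is repeatable

TARGET = [1] * 10  # a stand-in "fitness ideal", purely illustrative

def fitness(genome):
    """How closely the genome matches the target environment."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Copy with small random variations -- 'mutations' in the build plan."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)  # selection pressure:
    survivors = population[:10]                 # only the fitter half breeds
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]
    if fitness(population[0]) == 10:
        break

print(fitness(max(population, key=fitness)))  # climbs toward the maximum of 10
```

    Zombrex's "millions of years" caveat is really about how slow the fitness evaluations are; in silicon, generations can tick over in microseconds.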


  • Registered Users, Registered Users 2 Posts: 3,247 ✭✭✭stevejazzx


    Wicknight wrote: »
    Sure, but the machine won't do things we haven't built it to do.

    Depends surely on the way it's programmed? In the future perhaps programming will be able to give a machine such open-endedness that it essentially ends up doing things it was never programmed for?

    Hollywood tend to ignore this, assuming that something "alive" will do all the things animals do, such as fight for survival, protect itself, view itself as self important etc.

    That's true, but take HAL for example - it was protecting the mission by trying to kill Dave, so harming humans as a means of allowing something else to survive is a possibility.
    We do those things because we have evolved to do them. An AI won't do those things unless we decide it should do those things. Take Terminator. The argument is that as soon as Skynet became self aware it tried to protect itself from being shut off. Why? We would do that because we have a natural survival instinct that we have evolved. But Skynet won't have that unless the programmers decided to give it to him.

    In the case of Skynet and similar AI, sentience brings a new mysterious quality of self-preservation. However, this mysterious quality may be explained by underlying code. Who's to say that humans didn't start off the same?



    In some ways this relates back to religious thinking, theory of the mind stuff, viewing agency in nature. We assume that if Skynet is alive in a basic sense then it will have properties we associate with such a being. We don't need to have it explained to us why Skynet defended himself, we naturally assume he would because we naturally assume living things do this.


    No, I don't think this is the thinking at all; I think the idea is that sentience brings a sophistication of the underlying code that enhances its understanding of what it is to be alive. Very sci-fi, I grant you, but not beyond the realms of possibility considering our origins from star dust to now.


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    Dades wrote: »
    Yeah, but who's to say that in their desire to create a more "realistic" AI, someone isn't going to start recklessly programming such characteristics?

    Nothing, but it won't come as a surprise. No one is going to be like "Oh, I had no idea that the AI program we put in charge of the American nuclear arsenal and that we programmed to do anything it can to survive, was going to do anything it can to survive" :P


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    liamw wrote: »
    Well yes, a machine is programmed with a defined set of instructions/states.

    My point was that if you built the machine with sensory inputs, states and memory analogous to the human brain, then theoretically I don't see why it couldn't behave like a human.

    Well, it depends on what you mean by states analogous to the human brain. Obviously the closer you get the machine to working like a human, the more human-like it will be. But you have to make it as such. It is not just going to be human-like because you give it AI.
    liamw wrote: »
    The machine is indeed adhering to the instructions that the programmer intended, but the states and I/Os are so complex the behaviour is chaotic and practically unpredictable.

    Within certain parameters. Another Hollywood favourite is a computer AI doing things it was never designed to do, such as getting out onto the Internet, or getting into the security system of the building it is in (normally so it can kill those in the building)
    liamw wrote: »
    Also, if you designed a bot that could reproduce itself with minor variations in the build instructions (akin to mutations in genetic code), and added a selection pressure and fitness functions then that machine could 'evolve' just like humans do biologically?
    It could, but it would probably take millions of years.
    liamw wrote: »
    FYI: I would consider the human brain a deterministic machine, but would be interested in a discussion on this.

    I remember seeing something that suggested that in individual neurons quantum uncertainty played a role, which made the system, at a very low level, non-deterministic, but I can't remember exactly where. I'll try and dig it up.


  • Moderators, Society & Culture Moderators Posts: 25,558 Mod ✭✭✭✭Dades


    Wicknight wrote: »
    Nothing, but it won't come as a surprise. No one is going to be like "Oh, I had no idea that the AI program we put in charge of the American nuclear arsenal and that we programmed to do anything it can to survive, was going to do anything it can to survive" :P
    Well, so long as the really human-like AI intent on taking over the world remains constrained in a box on a table, and without access to killer robots or an army of nanobots we should be alright. :)


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    Dades wrote: »
    Well, so long as the really human-like AI intent on taking over the world remains constrained in a box on a table, and without access to killer robots or an army of nanobots we should be alright. :)

    That is another thing. Where the feck did Skynet get a factory to build Terminators?

    Worst.Movie.Ever :P


  • Registered Users, Registered Users 2 Posts: 26,578 ✭✭✭✭Turtwig


    Wicknight wrote: »
    That is another thing. Where the feck did Skynet get a factory to build Terminators?

    Worst.Movie.Ever :P

    It became a super virus that was unable to hack the defense networks; however, it knew that in time the humans would have no choice but to use their AI program as a last resort to get the virus out of the system, because Washington was growing impatient. The AI program was of course Skynet itself. The second the humans gave Skynet access to their own defense network, it hacked all databases in a second and took control of all automated systems. A cunning ploy.

    *Movie nerd.
    /runs.


  • Registered Users, Registered Users 2 Posts: 68,317 ✭✭✭✭seamus


    Wicknight wrote: »
    Sure, but the machine won't do things we haven't built it to do.
    Much of the research around AI is about basically developing "learning" algorithms and then setting the program loose to figure out how to do something. There's a romanticised notion that we'll develop a robot, switch it on, and it's instantly walking and talking. In reality, AI will be a long slow road where the machine has to learn things for itself, much like a human does.

    One example I saw recently enough was a bit like QWOP. A ragdoll form is displayed on the screen, released into an environment with earth-like physics and the program is left in charge of it. The program is able to do "stuff" which causes the arms and legs to move, but it doesn't have any specific knowledge of what it means to move the arm, for example. It's also told what defines success - e.g. walking, and what defines failure, e.g. falling.

    Then it's left to "learn" how to walk. When the program fails, it resets itself but holds onto the knowledge from the last iteration. They found that after a few thousand goes, the computer eventually learns how to make the man walk. Then they run the exercise thousands of times, and find that while the program always learns to walk eventually, it never learns in exactly the same way twice, due to the random environmental variables added to the program.

    It's complicated stuff, but you can see how, if you combine a few million of these learning algorithms into a single entity, the end results are going to be practically unpredictable. And that makes it next to impossible to insert breakpoints like "Do not kill humans" into what will be a very complicated AI.

    From a human perspective we have only a handful of prime directives - eat, survive, reproduce. And yet plenty of people manage to actively refuse to do them.
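    A toy version of that fail-reset-but-keep-the-knowledge loop, with a hidden goal sequence standing in for the physics simulation (which obviously doesn't fit in a forum post):

```python
import random

random.seed(0)  # fixed seed so the illustrative run is repeatable

GOAL = [2, 0, 1, 2, 1]  # the unknown "gait" the learner has to discover

def score(attempt):
    """How many steps succeed before the ragdoll 'falls over'."""
    steps = 0
    for move, target in zip(attempt, GOAL):
        if move != target:
            break
        steps += 1
    return steps

best = [random.randint(0, 2) for _ in range(5)]
for trial in range(5000):
    candidate = best[:]                                    # reset...
    candidate[random.randrange(5)] = random.randint(0, 2)  # ...try a variation
    if score(candidate) >= score(best):                    # keep what worked
        best = candidate
    if score(best) == 5:
        break

print(score(best))  # 5 -- it eventually "learns to walk"
```

    The random seed decides which dead ends it wanders into first, which is the same reason the real experiment never learned to walk the same way twice.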


  • Registered Users, Registered Users 2 Posts: 3,809 ✭✭✭CerebralCortex


    FYI, anyone interested in the leading thoughts on AI and its dangers, see here and here.


  • Closed Accounts Posts: 25,848 ✭✭✭✭Zombrex


    seamus wrote: »
    It's complicated stuff, but you can see how, if you combine a few million of these learning algorithms into a single entity, the end results are going to be practically unpredictable. And that makes it next to impossible to insert breakpoints like "Do not kill humans" into what will be a very complicated AI.

    Which is why I think it very unlikely that such systems would ever be used to do anything important. Even if the government turned over defense to a computer, they would want to know exactly when and why the computer would do something.

    A real system like Skynet would not be built with a black-box style neural network, because it would be next to impossible to figure out why it was doing something.


  • Closed Accounts Posts: 1,780 ✭✭✭liamw


    seamus wrote: »
    From a human perspective we have only handful of prime directives - eat, survive, reproduce. And yet plenty of people manage to actively refuse to do them.

    Indeed :pac:

    eat-survive-reproduce.jpg


  • Registered Users, Registered Users 2 Posts: 13 GaryCanDance


    My understanding of the concept of a singularity is that it will not be an "AI", unlike what some of the commenters are stating here; it will just be a human intelligence stored and processed via a machine. An artificial casing for a biological Operating System.

    This is the only real way to achieve human life with near immortality. Inevitable. Pity we won't live to see it


  • Advertisement
  • Registered Users, Registered Users 2 Posts: 3,809 ✭✭✭CerebralCortex


    My understanding of the concept of a singularity is that it will not be an "AI", unlike what some of the commenters are stating here; it will just be a human intelligence stored and processed via a machine. An artificial casing for a biological Operating System.

    This is the only real way to achieve human life with near immortality. Inevitable. Pity we won't live to see it

    Well then your understanding is limited and you'll need to do more reading.


  • Registered Users, Registered Users 2 Posts: 21,264 ✭✭✭✭Hobbes


    Well then your understanding is limited and you'll need to do more reading.

    He is partly correct, but the real idea is that technology reaches a point, surpassing what we currently have, where it becomes almost impossible to determine what comes next.

    As someone pointed out earlier in the thread, this has happened throughout history.

    Personally I think this has happened to some extent. I mean, if you told someone 20 years ago that we would use a complex computer network system to share pictures of lolcats, they would have thought you were crazy.


  • Banned (with Prison Access) Posts: 2,449 ✭✭✭SuperInfinity


    Dave! wrote: »
    PZ Myers has no time for Ray Kurzweil, and has written some rather vitriolic stuff about him on Pharyngula. Made me realise what a cúnt PZ is, and I stopped following his blog from then.

    I've never watched or read anything about Kurzweil, so can't really comment on it.

    You should. You might become a great fan of PZ Myers after you hear some of the nonsense Ray Kurzweil comes up with.


  • Registered Users, Registered Users 2 Posts: 3,809 ✭✭✭CerebralCortex


    You should. You might become a great fan of PZ Myers after you hear some of the nonsense Ray Kurzweil comes up with.

    Despite this Kurzweil has more of my respect than Myers especially after Myer's tirade.


  • Registered Users, Registered Users 2 Posts: 2,135 ✭✭✭POINTBREAK


    IBM's supercomputer destroys humans.
    -
    It's about the size of 80 fridges at the minute, but with Moore's Law, in about 25 years you will probably have it on your phone.


  • Registered Users, Registered Users 2 Posts: 3,809 ✭✭✭CerebralCortex


    POINTBREAK wrote: »
    IBM's supercomputer destroys humans.
    -
    It's about the size of 80 fridges at the minute, but with Moore's Law, in about 25 years you will probably have it on your phone.

    How did you calculate that probability? IMHO it's just too uncertain to determine the nature of computational developments in the next 10 years, never mind 25. However, if it keeps following a Moore's Law trend of development, then interesting times are ahead. For the record, I side with advancement rather than slowdown.
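    For what it's worth, the "80 fridges to a phone" arithmetic is easy to put numbers on, assuming the common reading of Moore's Law as a density doubling every two years (the 25-year horizon is POINTBREAK's figure, and whether the trend actually holds that long is exactly the uncertainty above):

```python
years = 25
doublings = years / 2        # one density doubling every ~2 years
shrink_factor = 2 ** doublings

# ~5,800x the transistors in the same space after 25 years -- enough,
# on paper, to collapse a room-sized machine into something pocketable.
print(round(shrink_factor))  # 5793
```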


  • Registered Users, Registered Users 2 Posts: 2,164 ✭✭✭cavedave


    CerebralCortex wrote: »
    How did you calculate that probability?

    In relation to Watson, I have a bet going with some friends that by January 2015 there will be a search engine that, given a set of quiz questions from one gameshow (The Weakest Link, Who Wants to Be a Millionaire, University Challenge), can get 7/10 right.

    I have not seen a prediction that these Learning by Reading programs will bring about the singularity. The level of understanding they have, or even might have (excluding strong AI), is not of the scale that would result in doubling GDP every two weeks, I believe. They are cool though


  • Registered Users, Registered Users 2 Posts: 3,809 ✭✭✭CerebralCortex


    cavedave wrote: »
    In relation to Watson, I have a bet going with some friends that by January 2015 there will be a search engine that, given a set of quiz questions from one gameshow (The Weakest Link, Who Wants to Be a Millionaire, University Challenge), can get 7/10 right.

    I have not seen a prediction that these Learning by Reading programs will bring about the singularity. The level of understanding they have, or even might have (excluding strong AI), is not of the scale that would result in doubling GDP every two weeks, I believe. They are cool though

    Doubling GDP? What has that got to do with the singularity?


  • Registered Users, Registered Users 2 Posts: 2,164 ✭✭✭cavedave


    From post #15 I argued you can think of a singularity in terms of the amount of goods and services we can produce.
    As Vinge said, the first singularity was when humans evolved: "I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth".

    Hanson says
    "Then roughly 10,000 years ago the farming revolution happened--not exactly sure what was the key cause, but there was a sudden change. And then roughly 200 years ago there was the Industrial Revolution. The thing that distinguishes these changes from all the other thousands of other big important things that have happened in history was that at these moments the growth rate in the human economy dramatically increased in a relatively short time... instead of doubling every quarter of a million years we started doubling every thousand. Big change. Factor of 200 in growth rates, at least...
    then with the Industrial Revolution, within a very short time within a few hundred years--which is small compared to the thousand-year doubling time--we went from doubling every thousand years to, over the last century, doubling every 15 years...
    In terms of a time-scale, they were very short time-scale compared to previous time scales; and they changed the time-scale of what was happening. So if we just take those events, just do numerology, just say, let's pretend like the next thing that happened would have a growth-rate increase similar to the previous growth rate increases, and then maybe the modes, number of doublings that happened during each time are similar. If we just use those numbers to project the future... We can get a remarkably tight prediction for the new growth rate, because these increases in doubling times have in fact been remarkably consistent. And so what you get is roughly in the one- to two-week doubling time range. So, pause, let that sink in. One- to two-week doubling time in the world economy. Instead of the current 15 years. "

    So if this singularity is as much of an increase as the industrial revolution was, it would mean a doubling of stuff produced every two weeks. If it happened with the same decreasing time interval as human -> farming -> industrial, it would be in the next 100 years.
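    Hanson's numerology from the quote can be replayed in a few lines. This is a cruder projection than his (he also matches the number of doublings per era, which is how he tightens the estimate to one or two weeks), so the naive range comes out wider:

```python
# Doubling times from the quote: foraging -> farming -> industry.
doubling_times_years = [250_000, 1_000, 15]

# Each transition sped growth up by a large, roughly similar factor.
speedups = [a / b for a, b in zip(doubling_times_years, doubling_times_years[1:])]
print([round(s) for s in speedups])  # [250, 67]

# Apply the same range of speedups to today's ~15-year doubling:
fastest = 15 / max(speedups) * 52  # in weeks
slowest = 15 / min(speedups) * 52
print(f"{fastest:.0f} to {slowest:.0f} weeks per doubling")  # 3 to 12 weeks
```

    Same ballpark either way: an economy doubling in weeks rather than decades.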


  • Registered Users, Registered Users 2 Posts: 109 ✭✭football_lover


    Jernal wrote: »
    I'd say it's more likely we will have reverted back to some kind of dark ages (or worse!) than any sort of era where we become technologically adept.

    This is highly unlikely, as too many people have knowledge of how to build and engineer systems.

    It is like saying electricity will be forgotten when 3/4 of the earth's population have experienced it, tens of millions of people know how to build these systems, and tens of thousands know how to construct them from scratch.

    And that does not take into account all the other advances that have taken place.

