
SETI in Trinity

  • 27-11-2005 1:22am
    #1
    Registered Users, Registered Users 2 Posts: 7,110 ✭✭✭Thirdfox


    Hey there,

    Is anyone interested in joining the SETI programme (where you give some of your unused computing power to searching for life in the universe)?

    Surely the computers in the library can be configured to run it when no-one's using them, right?

    I've just set up a team, Trinity College Ireland, on the site...


Comments

  • Registered Users, Registered Users 2 Posts: 1,945 ✭✭✭cuckoo


    Is SETI still running? I can remember that being set up years ago.

    Has anything been found 'out there'?


  • Registered Users, Registered Users 2 Posts: 887 ✭✭✭wheresthebeef


    You would need the permission of IS Services to do something like that, and I doubt it would be given.


  • Registered Users, Registered Users 2 Posts: 7,314 ✭✭✭Nietzschean


    Could they in theory? Yes. Will they? No. If they were to use the unused CPU cycles for anything, it would be along the lines of Folding@home or one of the grid computing projects, I imagine...

    But they won't, as it's messy at best to get it set up and working properly across so many machines...


  • Registered Users, Registered Users 2 Posts: 3,461 ✭✭✭DrIndy


    To be frank, why does TCD share a supercomputer with Belfast when they could have a supercomputer of their own if they set up a parallel processing grid with all the unused PCs at night?


  • Registered Users, Registered Users 2 Posts: 887 ✭✭✭wheresthebeef


    'Cos that would be a clever thing to do. If ISS has proven anything, it's that everything must be needlessly complicated and involve lots of technical hoo-ha and skullduggery.
    Look at the fiasco that is using the wireless network. In DCU you just enter a proxy script, put in your username and password, and you're away (AFAIK).
    Here you have to bring your laptop for an STI screen. They want swabs of your hard drive and OS. And you need Windows XP Pro. Pain in the bottom.


  • Registered Users, Registered Users 2 Posts: 3,225 ✭✭✭JackKelly


    You can avoid that by running Linux ;)


  • Registered Users, Registered Users 2 Posts: 11,198 ✭✭✭✭Crash


    Part of the issue with that, wheresthebeef, is that DCU is out in the middle of nowhere, and they seem less worried about people scabbing their wireless (as it's pathetically easy to get on it).

    Although I prefer the Trinity wireless - DCU wireless is actually technically more locked down in certain areas.


  • Registered Users, Registered Users 2 Posts: 887 ✭✭✭wheresthebeef


    No excuse. They could cover the campus with a giant dome made of pipe cleaners and tin foil to attenuate the wireless coverage. Problem solved.

    But instead they take the hard option, making people jump through hoops just to get onto the network.


  • Registered Users, Registered Users 2 Posts: 7,314 ✭✭✭Nietzschean


    DrIndy wrote:
    To be frank, why does TCD share a supercomputer with Belfast when they could have a supercomputer of their own if they set up a parallel processing grid with all the unused PCs at night?

    ISS have nothing whatsoever to do with anyone interested in supercomputing facilities in the college (except internet access, via licences mostly).

    Secondly, we have 3 clusters within the college in the HPC department, the top of which is ranked 226 out of the top 500 supercomputers in the world.

    Thirdly, a cluster made from individual workstations around the college would be useless for most supercomputing uses, as the interconnect is far too slow and the machines are prone to crashing. (HPC's new setup uses InfiniBand, which is rated up to 80 Gbit/sec.)

    Fourthly, regarding the wireless network: our system gives greater access to the network, so as a result it needs to be locked down more. Also, if that is the case in DCU, that's highly (read: stupidly) insecure, as it's quite easy then to sniff people's proxy passwords and get on the internet. ISS have a set of guidelines from the college on what security measures they must take. They are just attempting to comply with these, so it's not their fault.

    And that is entirely unrelated to the issue of coverage, which is being gradually solved.


  • Registered Users, Registered Users 2 Posts: 1,880 ✭✭✭nosmo


    wheresthebeef wrote:
    Here you have to bring your laptop for an STI screen. They want swabs of your hard drive and OS. And you need Windows XP Pro. Pain in the bottom.

    Thanks to the measures they take, you're less likely to see a case of it; but you'd be saying the opposite if the college network went down on a day you desperately needed to get research done on your machine, because some yokel came in and gave everyone the latest Windows worm.
    With earlier versions of Windows you can't authenticate on the wireless network, simple as that.


  • Closed Accounts Posts: 287 ✭✭Thraktor


    http://www.apple.com/science/profiles/vatech2/index.html

    Virginia Tech have been quite successful in building low-cost supercomputing clusters from Macs over the past few years (most notably in 2003, when their first machine was placed second in the world supercomputer list at a cost of about 1/70th that of the first-placed machine, the Earth Simulator), and I have wondered to myself why the same thing has never been tried on a campus-wide network. Obviously, even with a gigabit network serving the grid, the interconnect speed between the nodes would limit the type of high-performance computing tasks that could be run on the system compared to InfiniBand, but with Xgrid built into OS X Tiger (and apparently fairly easy to set up), I'm surprised that no colleges with large numbers of Macs on campus have even tried it.
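
    To put rough numbers on that interconnect point, here is a back-of-envelope sketch in Python. All figures are assumptions for illustration: nominal 1 Gbit/s Ethernet versus the ~10 Gbit/s per-node InfiniBand mentioned later in the thread, and an assumed 100 MB exchanged per node per step.

        # Rough comparison of per-step communication time on two interconnects.
        # Every number here is an assumption for illustration, not a measurement.

        def transfer_time(data_bytes, link_gbit_per_s):
            """Seconds to move data_bytes over a link of the given nominal speed."""
            return data_bytes * 8 / (link_gbit_per_s * 1e9)

        data_per_step = 100 * 1024 * 1024   # assume 100 MB exchanged per node per step

        for name, gbit in [("gigabit Ethernet", 1.0), ("InfiniBand, 10 Gbit/s per node", 10.0)]:
            print(f"{name}: {transfer_time(data_per_step, gbit):.2f} s per step")

        # Output: roughly 0.84 s vs 0.08 s per step. If the compute phase between
        # steps is short, the slower link dominates the runtime, which is why
        # tightly coupled jobs want a fast interconnect.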

    I'd be very interested to know, though, if ISS were to switch as far as possible to using Macs in the computer rooms on campus (apart from those specifically requiring features of the Windows OS), and if some of the smaller high-performance computing projects were to donate their funding towards it in exchange for use of the grid, whether it could in fact provide both better computers for student use and a powerful grid-based supercomputer for certain computing projects. The college network would of course have to be upgraded (to gigabit, I'd imagine, as the cost of InfiniBand on that sort of scale would be stratospheric), but I'd like to see a financial study done on whether the entire switchover would be worth it.


  • Registered Users, Registered Users 2 Posts: 7,314 ✭✭✭Nietzschean


    The Apple case involved those comps being in a datacentre, unused, properly cooled, with a good power supply and top-end switches... not what you will ever get from labs.

    Heck, the computers around college don't even stay on when they aren't in use, they go to sleep.

    It's not worthwhile trying to use an unreliable resource like computer lab systems for most of the HPC-type research going on in the college. They are being turned on and off all the time, and can have different loads put on them at any time, etc... It's not good for proper HPC stuff at all.

    As regards your OS X fanboyisms there, if you look at nearly all the top supercomputer installations, they all use Linux, which would be free, and god knows the hardware would be well cheaper than Apple's stuff. But regardless, it would be cheaper to set up a new datacentre and put all the machines in there than to try to get a college network to behave anything like that well. There is also the issue of the increased power usage that using all the machines would pose, and the increased heat output... which would require more/better aircon in all the computer labs... Even running SETI in all the labs would probably cause horrible temp spikes...


  • Registered Users, Registered Users 2 Posts: 8,452 ✭✭✭Time Magazine


    Serenity wrote:
    ISS have nothing whatsoever to do with anyone interested in supercomputing facilities in the college (except internet access, via licences mostly).

    Secondly, we have 3 clusters within the college in the HPC department, the top of which is ranked 226 out of the top 500 supercomputers in the world.

    Thirdly, a cluster made from individual workstations around the college would be useless for most supercomputing uses, as the interconnect is far too slow and the machines are prone to crashing. (HPC's new setup uses InfiniBand, which is rated up to 80 Gbit/sec.)

    Fourthly, regarding the wireless network: our system gives greater access to the network, so as a result it needs to be locked down more. Also, if that is the case in DCU, that's highly (read: stupidly) insecure, as it's quite easy then to sniff people's proxy passwords and get on the internet. ISS have a set of guidelines from the college on what security measures they must take. They are just attempting to comply with these, so it's not their fault.
    =
    If ISS has proven anything, it's that everything must be needlessly complicated and involve lots of technical hoo-ha and skullduggery.
    TBH.
    :v:


  • Registered Users, Registered Users 2 Posts: 7,314 ✭✭✭Nietzschean


    I'm not the one who brought up the potential of using lab computers as a cluster... and given the context I would hardly call my comments needlessly complicated, but to each his own.


  • Registered Users, Registered Users 2 Posts: 8,452 ✭✭✭Time Magazine


    Ah Ian I'm only joking with you.


  • Registered Users, Registered Users 2 Posts: 7,110 ✭✭✭Thirdfox


    Just to kinda go back to the original point, here is the team page on the SETI site:
    http://setiathome.berkeley.edu/team_display.php?teamid=121231


  • Registered Users, Registered Users 2 Posts: 7,314 ✭✭✭Nietzschean


    Just join one of the boards.ie teams for these things... much easier.

    I'm a Folding@home fan myself anyhoo:
    http://fah-web.stanford.edu/cgi-bin/main.py?qtype=teampage&teamnum=39432


  • Registered Users, Registered Users 2 Posts: 427 ✭✭illumin


    You say "we" Serenity when referring to ISS, so do you work for them?


  • Registered Users, Registered Users 2 Posts: 7,314 ✭✭✭Nietzschean


    "We" as in Trinity? I dunno how you would even guess ISS, as I referred to a department with "we", not ISS :) [And no, I don't work for that department (HPC) either.]


  • Closed Accounts Posts: 287 ✭✭Thraktor


    Serenity wrote:
    The Apple case involved those comps being in a datacentre, unused, properly cooled, with a good power supply and top-end switches... not what you will ever get from labs.

    Heck, the computers around college don't even stay on when they aren't in use, they go to sleep.

    It's not worthwhile trying to use an unreliable resource like computer lab systems for most of the HPC-type research going on in the college. They are being turned on and off all the time, and can have different loads put on them at any time, etc... It's not good for proper HPC stuff at all.

    As regards your OS X fanboyisms there, if you look at nearly all the top supercomputer installations, they all use Linux, which would be free, and god knows the hardware would be well cheaper than Apple's stuff. But regardless, it would be cheaper to set up a new datacentre and put all the machines in there than to try to get a college network to behave anything like that well. There is also the issue of the increased power usage that using all the machines would pose, and the increased heat output... which would require more/better aircon in all the computer labs... Even running SETI in all the labs would probably cause horrible temp spikes...

    You're quite right about Linux, but my main reason for referencing Macs specifically is that, if you're running a couple of hundred OS X Tiger Macs on a network, it really is as simple as flicking a switch to turn them into a grid. I was merely wondering how much computing power that could produce. Oh, and the way Xgrid works, it only uses the computer's resources while in "screensaver mode", i.e. when the computer would otherwise be asleep.


  • Registered Users, Registered Users 2 Posts: 7,314 ✭✭✭Nietzschean


    See previous points on aircon/power usage... for 1000+ machines both figures cost much €€€€€ (never mind network issues etc.).

    But Xgrid sounds fair enough; then again, it's just as easy to leave stuff on a Linux box reniced so it stays out of the way...
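
    A minimal sketch of that "reniced background job" idea in Python, assuming a Unix machine; the work function is just a placeholder, not any real SETI or Folding client.

        import os
        import time

        def crunch_one_unit():
            """Placeholder for a real work unit (e.g. a SETI- or Folding-style calculation)."""
            return sum(i * i for i in range(1_000_000))

        def main():
            # Drop our own priority to the lowest niceness so interactive users
            # always win the CPU; equivalent to launching with `nice -n 19`.
            os.nice(19)
            for _ in range(10):      # a real worker would loop indefinitely
                crunch_one_unit()
                time.sleep(0.1)      # brief pause so we yield even under contention

        if __name__ == "__main__":
            main()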

    But anyway, I'm just nearly sounding argumentative...


  • Registered Users, Registered Users 2 Posts: 427 ✭✭illumin


    Serenity wrote:
    "We" as in Trinity? I dunno how you would even guess ISS, as I referred to a department with "we", not ISS :) [And no, I don't work for that department (HPC) either.]

    I was just wondering because I work for ISS and your name wasn't familiar :P


  • Registered Users, Registered Users 2 Posts: 7,314 ✭✭✭Nietzschean


    I used to do admin for one of the departments, but never worked for ISS. Heard most of the interesting stuff at the sysadmin meetings...


  • Registered Users, Registered Users 2 Posts: 1,945 ✭✭✭cuckoo


    At night when some of the computer rooms are locked, are the computers left on/sleeping? If so, would it use more power to turn them off at 10pm and back on at 8am than it does to leave them be?


  • Registered Users, Registered Users 2 Posts: 7,314 ✭✭✭Nietzschean


    It's possible they go fully into hibernation if unused for a long time (which dumps RAM to the hard disk and powers off)... though I don't use those machines, so I don't really know.


  • Closed Accounts Posts: 191 ✭✭vinks


    Just to clarify a few things: the networking on the new cluster in TCHPC (in TCD) is actually rated at 10 Gbit/s per node (there are 356 nodes), with two 288-port InfiniBand switches. The inter-switch cabling gives a maximum throughput of 80 Gbit/s, and the backplane on each switch is capable of 300+ Gbit/s (I'm not too sure of the exact backplane bandwidth, but you can check it at www.voltaire.com; the switch is the ISR9288).

    And regarding the "Apple cluster" from a few years ago: comparing it to the Earth Simulator is unfair. Even though the performance of the NEC machine was ~35 TFLOP/s and its cost was of the order of ~500 million USD, that's because of the architecture of the machine and the fact that, instead of federating the network of processors, they manufactured one massive crossbar between the (I think approximately) 5000 processors. That crossbar was probably the most expensive thing in the machine.

    Whereas the Apple cluster had only InfiniBand, or possibly even gig-e, at the time. The machines also differ considerably in their applications: the Apple cluster was built for bioinformatics, whose needs are different from those of the NEC machine, which was built from the ground up as a vector machine for weather simulations.

    Also, you seem to be confusing "grid computing" with "task farming"; Xgrid is more task farming than actual grid computing. But then again, "grid computing" is a very loose term.

    Linux/OS X aren't the only operating systems you can "grid compute" or "task farm" on; it's possible to set up task farming under Windows, but it isn't very good. Task farming also introduces the problem of varying CPU speeds and memory sizes, which causes havoc for the scheduler that assigns packets/jobs to each compute node. It brings up the whole question of whether you spend time tuning your scheduler to run across N nodes of different capabilities, or run on a capability cluster and get your results ten times faster than task farming.

    And to be honest, farming out lots of little parts of a large problem usually only applies to problems that are embarrassingly parallel, such as stochastic simulations, parameter searching, pattern matching, etc... There is a whole class of problems requiring high-speed interconnects which you could never run on a loosely coupled system.
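
    As a purely illustrative sketch of task farming an embarrassingly parallel job across nodes of different speeds (simulated here with threads and artificial delays; none of this reflects the actual TCHPC setup):

        import queue
        import threading
        import time

        def evaluate(x):
            """Independent work unit: a stand-in for one run of a stochastic
            simulation or one point of a parameter search."""
            return x * x - 3 * x + 2

        def node(name, relative_speed, work, results):
            """Simulated compute node: keeps pulling units until the queue is empty.
            Faster nodes finish each unit sooner, so they naturally grab more units,
            which is the simplest way a self-scheduling task farm copes with mixed hardware."""
            done = 0
            while True:
                try:
                    x = work.get_nowait()
                except queue.Empty:
                    break
                time.sleep(0.001 / relative_speed)   # pretend the unit takes some time
                results.append((x, evaluate(x)))
                done += 1
            print(f"{name} (relative speed {relative_speed}) processed {done} units")

        work = queue.Queue()
        for x in range(1000):                        # 1000 independent work units
            work.put(x)

        results = []
        nodes = [threading.Thread(target=node, args=("fast-node", 3, work, results)),
                 threading.Thread(target=node, args=("slow-node", 1, work, results))]
        for t in nodes:
            t.start()
        for t in nodes:
            t.join()
        print(f"collected {len(results)} results")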

    Anyway, that was my rant about this thread, which I so nicely missed a month or two ago.


  • Registered Users, Registered Users 2 Posts: 1,945 ✭✭✭cuckoo


    :confused:

    It's English, but not as I know it... :p


  • Closed Accounts Posts: 191 ✭✭vinks


    Meh! I hate small editing windows; it makes it hard to write things.


  • Registered Users, Registered Users 2 Posts: 4,579 ✭✭✭Pet


    Ehm... I think cuckoo's point was not about the quality of your grammar or spelling, but the subject matter...


  • Registered Users, Registered Users 2 Posts: 1,945 ✭✭✭cuckoo


    Pet wrote:
    Ehm... I think cuckoo's point was not about the quality of your grammar or spelling, but the subject matter...

    Exactly. Sorry vinks, it was more a comment on my own complete lack of knowledge about this stuff. I could understand the separate words, but the meaning of the sentences eludes me.


  • Registered Users, Registered Users 2 Posts: 4,003 ✭✭✭rsynnott


    DrIndy wrote:
    To be frank, why does TCD share a supercomputer with Belfast when they could have a supercomputer of their own if they set up a parallel processing grid with all the unused PCs at night?

    This type of parallel processing is really only appropriate for tasks that can be divided into many small pieces with little communication between the processors.
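
    For illustration, a minimal Python sketch of that "many independent pieces" shape; the scoring function is a placeholder, and the same pattern applies whether the workers are CPU cores or lab machines.

        from multiprocessing import Pool

        def score_chunk(chunk_id):
            """One independent piece of work; it needs no data from the other pieces."""
            return sum((chunk_id * 1_000 + i) % 7 for i in range(1_000))

        if __name__ == "__main__":
            with Pool() as pool:        # one worker process per CPU core by default
                results = pool.map(score_chunk, range(100))
            print(sum(results))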

    And re putting SETI@home on library computers: computers that are doing something use more electricity, and ultimately someone would have to pay for it.
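
    A back-of-envelope estimate of that cost; every figure below is an assumption for illustration, not a measured value.

        # Rough yearly cost of running SETI-style work on idle lab PCs overnight.
        machines        = 500       # assumed number of lab PCs
        extra_watts     = 60        # assumed extra draw per PC under full CPU load
        hours_per_night = 10
        nights_per_year = 300
        price_per_kwh   = 0.12      # assumed electricity price in EUR per kWh

        kwh_per_year = machines * extra_watts / 1000 * hours_per_night * nights_per_year
        print(f"~{kwh_per_year:,.0f} kWh per year, roughly EUR {kwh_per_year * price_per_kwh:,.0f}")
        # With these assumptions: about 90,000 kWh per year, on the order of EUR 10,800.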


  • Closed Accounts Posts: 191 ✭✭vinks


    DrIndy wrote:
    To be frank, why does TCD share a supercomputer with Belfast when they could have a supercomputer of their own if they set up a parallel processing grid with all the unused PCs at night?

    The machine you are talking about was decommissioned a year or two ago, after about four years in production, so that's quite old news :P We don't share our machine with QUB anymore. The funny thing about that machine is that its peak performance isn't even worth one or two nodes of the current new cluster.

