
Stick up your best shell scripts

  • 18-09-2003 9:40am
    #1
    Closed Accounts Posts: 62 ✭✭


    I've been messing about with shell scripts the last few days and I'm fairly ****e but hey...

    Anyway, why don't you stick up your best shell scripts that make monotonous tasks quick, dead handy scripts, whatever...

    If it isn't included in linux and you or a friend wrote it stick it up and share the wealth...



Comments

  • Closed Accounts Posts: 157 ✭✭BenH


    This might be of some interest, especially if you're a Red Hat fan:

    http://night-shade.org.uk/basic-configs/


  • Registered Users, Registered Users 2 Posts: 2,518 ✭✭✭Hecate


    I wrote this to automatically update the ports tree on my FreeBSD box; it has a cron entry to run every Saturday morning, gotta keep up to date with the latest and greatest:
    #!/bin/sh

    IFTEST=`ifconfig tun0 | grep UP`

    if [ -n "$IFTEST" ]; then
        /usr/local/bin/cvsup -g -L 2 /root/ports-supfile
    else
        ppp -ddial nolimits
        sleep 40
        /usr/local/bin/cvsup -g -L 2 /root/ports-supfile
        killall ppp
    fi

    The box also stores a fair amount of divxs at any one time, so I have a script to search out any movies that are on the server and output them to an HTML page:
    #!/bin/sh

    echo "<html><h1>Movie feed .. yum</h1><br><p align=left><pre>" > /usr/local/www/data/film_listing.htm

    find / -name '*.avi' >> /usr/local/www/data/film_listing.htm
    find / -name '*.ogm' >> /usr/local/www/data/film_listing.htm

    echo "</pre></p></html>" >> /usr/local/www/data/film_listing.htm

    That's it for now, apart from a few one-liners that do boring but useful stuff like dialling up with ppp and setting the time on the server using ntpdate.

    I had some pretty handy ones for backing up users home directories and mysql/pgsql accounts to tapes using a combination of tar and rsync on the DIT netsoc server, but it's offline at the moment :(


  • Registered Users, Registered Users 2 Posts: 2,077 ✭✭✭parasite


    I've been googling for ways to total the amount of hours of my ppp sessions per month, but ntop etc. seem like overkill, and I've been trying to think of a script to write for it. What's the best way to go about it: grep ps for ppp every so often, or something better?
    :confused:
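One way to go about it, sketched under the assumption that pppd writes a "Connect time ... minutes" summary line to /var/log/messages when a session ends (the log path, message wording and the sample entries below are assumptions; check your own syslog):

```shell
# Sum pppd "Connect time" summaries from a syslog-style file and report
# the total in hours. $(NF-1) grabs the minutes figure from lines like:
# "Sep 18 09:40:01 box pppd[71]: Connect time 30.0 minutes."
total_ppp_hours() {
    awk '/pppd.*Connect time/ { total += $(NF-1) }
         END { printf "%.1f\n", total / 60 }' "$1"
}

# Fabricated two-session log, just to demonstrate:
printf '%s\n' \
    'Sep 18 09:40:01 box pppd[71]: Connect time 30.0 minutes.' \
    'Sep 19 10:15:42 box pppd[80]: Connect time 90.0 minutes.' \
    > /tmp/sample_messages
total_ppp_hours /tmp/sample_messages   # 2.0
```

This avoids polling ps entirely: the totals come from sessions pppd has already accounted for itself.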


  • Closed Accounts Posts: 484 ✭✭ssh


    Get your boys out of there, this is gonna be a big one... Adds services from your current runlevel under debian...

    [PHP]
    #!/bin/bash

    echo "addrc - Enable services at startup for Debian systems"

    function print_current_services
    {
        \ls /etc/rc$rl.d/ |
        while read line; do
            order=$(echo $line | cut -b 2,3)
            echo $order: $(echo $line | sed -e 's/S[0-9]*//g')
        done
    }

    rl=$(runlevel | cut -d " " -f 2)
    service=XX

    while [ $service == XX ]; do
        echo -n "Enter the name of the service [Hit return to see a list]: "
        read service

        tmpvar=XX$service

        if [ $tmpvar == XX ]; then
            service=XX
            echo
            \ls /etc/init.d/
            echo
        fi

        \ls /etc/init.d/ | egrep -q ^$service$
        if [ $? != 0 ] && [ $tmpvar != XX ]; then
            echo Service does not exist
            service=XX
        fi
    done

    echo -n "Checking to see if $service is already configured to run... "
    \ls /etc/rc$rl.d | sed -e 's/S[0-9]*//g' | grep -q ^$service$
    if [ $? == 0 ]; then
        echo Failed - already there
        exit
    fi
    echo passed

    echo "Here is a list of services and the order in which they start: "
    echo
    print_current_services

    sorder=XX
    while [ $sorder == XX ]; do
        echo
        echo -n "Enter the order you want the service to start in (10-99): "
        read sorder
        echo $sorder | egrep -q ^[0-9][0-9]$
        if [ $? != 0 ]; then
            echo Enter a number between 10 and 99
            sorder=XX
        fi
    done

    echo -n "Adding service to list... "
    ln -s /etc/init.d/$service /etc/rc$rl.d/S$sorder$service

    echo Checking to see if the service is listed for shutdown...
    \ls /etc/rc1.d/ | sed -e 's/K[0-9]*//g' | egrep -q ^$service$
    if [ $? != 0 ]; then
        ln -s /etc/init.d/$service /etc/rc1.d/K$sorder$service
    fi
    echo done

    echo "Here's how things look now:"

    print_current_services
    [/PHP]


  • Registered Users, Registered Users 2 Posts: 1,862 ✭✭✭flamegrill


    Here's a vhost adding script I fired together; it's nasty but it works. Basically it adds the proper folders etc. for the user/site in question. It prints the httpd.conf entry to screen and you can do what you like with that.

    I also wrote another version that does some fancy log work with cronolog. I'll leave that for another day.

    I've got a nice bit of c that I'll paste in a bit to check /var/log/messages for time online :)


    [PHP]

    #!/usr/bin/php -q
    <?

    if (isset($argv[1]))
    {
        $domain = $argv[1];
        $user = $argv[2];
        if (isset($argv[3]))
        {
            $vhostip = $argv[3];
        }
        else
        {
            $vhostip = "217.114.170.92";
        }
        $vhost = "
    <VirtualHost $vhostip>
        ServerName www.$domain
        ServerAdmin $user@$domain
        DocumentRoot /home/$user/$domain/web
        ServerAlias $domain
        DirectoryIndex index.html index.htm index.php index.php4 index.php3 index.shtml index.cgi index.pl
        ScriptAlias /cgi-bin/ /home/$user/$domain/cgi-bin/
        AddHandler cgi-script .cgi
        AddHandler cgi-script .pl
        ErrorLog /home/$user/$domain/logs/error_log
        CustomLog /home/$user/$domain/logs/access_log combined
        AddType application/x-httpd-php .php .php4 .php3
        AddType text/html .shtml
        AddHandler server-parsed .shtml
        #ErrorDocument 400 /error/invalidSyntax.html
        #ErrorDocument 401 /error/authorizationRequired.html
        #ErrorDocument 403 /error/forbidden.html
        #ErrorDocument 404 /error/fileNotFound.html
        #ErrorDocument 405 /error/methodNotAllowed.html
        #ErrorDocument 500 /error/internalServerError.html
        #ErrorDocument 503 /error/overloaded.htm
    </VirtualHost>
    ";

        print $vhost;

        // nasty shell code follows to make directories to keep apache happy :)
        $shellcode = "mkdir -p /home/$user/$domain/web; mkdir -p /home/$user/$domain/cgi-bin/; mkdir -p /home/$user/$domain/logs/";
        #passthru($shellcode);
    }
    else
    {
        print "Usage is as follows:\n\n";
        print "vhostadd.php <domainname> <user> optionally give vhost-ip (without the <>)\n";
    }
    ?>


    [/php]


  • Banned (with Prison Access) Posts: 16,659 ✭✭✭✭dahamsta


    [EDIT: Ah, because of the domain. Tool. Never mind.]

    You can lose the multiple mkdirs; it accepts multiple directories.
    mkdir -p /home/$user/$domain/web /home/$user/$domain/cgi-bin /home/$user/$domain/logs
    
    HTH,
    adam
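For what it's worth, that one-liner can be shortened again with bash brace expansion (a bash-ism, not portable sh). The /tmp root and the demo user/domain below are made up so the example can run anywhere:

```shell
# One mkdir -p call; the braces expand to the three subdirectories.
user=demo
domain=example.com
mkdir -p "/tmp/home/$user/$domain"/{web,cgi-bin,logs}
ls "/tmp/home/$user/$domain"
```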


  • Registered Users, Registered Users 2 Posts: 1,862 ✭✭✭flamegrill


    Originally posted by dahamsta
    [EDIT: Ah, because of the domain. Tool. Never mind.]

    You can lose the multiple mkdir's, it accepts multiple directories.

    mkdir -p /home/$user/$domain/web /home/$user/$domain/cgi-bin /home/$user/$domain/logs
    
    HTH,
    adam

    Ah yes indeedy, a less evil shell argument :)

    Still looking for a powercable for the old box with the pppd timer thing. :)


  • Registered Users, Registered Users 2 Posts: 1,862 ✭✭✭flamegrill


    This isn't 100% accurate, but it will be good up to 1-2 hours of your actual usage. Saved me heaps on UTV in the later months :)


    [PHP]

    /*
    program to parse /var/log/messages for pppd dialup information
    by Paul Kelly (paul@dahomelands.net) 2002
    */



    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define DATELEN 16



    typedef struct ppp_stat_st {
        struct {
            unsigned long connects, disconnects;
            unsigned long connected_secs;
            unsigned long sentk, rcvdk;
        } total;

        struct {
            unsigned long hup, term;
        } reasons;

        struct {
            unsigned long connected_secs;
            unsigned long sentk, rcvdk;
        } best;
    } pstat;

    void pppd_log_stat(pstat *ps, FILE *fp) {
        char buf[BUFSIZ], *p, *q;
        unsigned long l;

        while (fgets(buf, sizeof buf, fp)) {
            buf[sizeof buf - 1] = 0;

            if (!(p = strstr(buf + DATELEN, "pppd"))) /* not a pppd logline */
                continue;

            if (strstr(buf, "LCP Echo")) /* debug mode on :[ */
                continue;

            if (strstr(p, "Connection terminated.")) {
                ps->total.disconnects++;
                continue;
            }

            if (strstr(p, "SIGHUP")) {
                ps->reasons.hup++;
                continue;
            }

            if (strstr(p, "signal 15.")) {
                ps->reasons.term++;
                continue;
            }

            if ((q = strstr(p, "Connect time"))) {
                l = strtoul(q + 13, 0, 10) * 60;
                ps->total.connected_secs += l;

                if (ps->best.connected_secs < l)
                    ps->best.connected_secs = l;

                continue;
            }

            if ((q = strstr(p, "Sent "))) {
                l = strtoul(q + 5, NULL, 10) / 1024;
                ps->total.sentk += l;

                if (ps->best.sentk < l)
                    ps->best.sentk = l;

                if (!(q = strstr(p, "ived")))
                    continue; /* should never happen */

                l = strtoul(q + 5, NULL, 10) / 1024;
                if (ps->best.rcvdk < l)
                    ps->best.rcvdk = l;
                ps->total.rcvdk += l;
                continue;
            }
        }
    }

    void paul_kelly_fmt_print(FILE *fp, pstat *ps) {
        fprintf(fp,
            "PPP INTERFACE USAGE REPORT (complete)\n"
            "\n"
            "%13lu ppp sessions\n"
            "%13lu Hours online\n"
            "%13lu Kilobytes sent\n"
            "%13lu Kilobytes received\n"
            "\n"
            "Averages for this period:\n"
            "\n"
            "%13lu Hours online\n"
            "%13lu Kilobytes sent\n"
            "%13lu Kilobytes received\n"
            "\n"
            "Best records for a single session:\n"
            "\n"
            "%13lu Hours online\n"
            "%13lu Kilobytes sent\n"
            "%13lu Kilobytes received\n"
            "\n"
            "Connection termination reason counts:\n"
            "\n"
            "%13lu Modem hangup\n"
            "%13lu Received sigterm\n"
            "\n",

            ps->total.disconnects,
            ps->total.connected_secs / 60 / 60,
            ps->total.sentk,
            ps->total.rcvdk,

            /* guard against divide-by-zero when the log holds no sessions */
            ps->total.disconnects ? ps->total.connected_secs / 60 / 60 / ps->total.disconnects : 0,
            ps->total.disconnects ? ps->total.sentk / ps->total.disconnects : 0,
            ps->total.disconnects ? ps->total.rcvdk / ps->total.disconnects : 0,

            ps->best.connected_secs / 60 / 60,
            ps->best.sentk,
            ps->best.rcvdk,

            ps->reasons.hup,
            ps->reasons.term
        );
    }

    int main(int ac, char **av) {
        pstat ps;
        char *filename = "/var/log/messages";
        FILE *fp;

        if (ac > 1)
            filename = av[1];

        if (!(fp = fopen(filename, "r"))) {
            perror(filename);
            return EXIT_FAILURE;
        }

        memset(&ps, 0, sizeof ps);
        pppd_log_stat(&ps, fp);

        fclose(fp);

        paul_kelly_fmt_print(stdout, &ps);
        return EXIT_SUCCESS;
    }

    [/PHP]

    If it won't copy/paste-compile from here, I'll email people the code. :-)

    Currently writing a few bits n bobs for handy administration :), will post them if/when they ever get completed.


  • Closed Accounts Posts: 62 ✭✭jerk


    nice one lads, the vhost one will be dead handy.


  • Closed Accounts Posts: 484 ✭✭ssh


    Analyses squid logs... needs python-dns too. It's also a little inaccurate because it divides by 1000 where it should divide by 1024...

    [php]
    #!/usr/bin/python

    import DNS
    import re
    import sys

    DNS.ParseResolvConf()
    requests = {}
    results = {}
    hitsperhost = {}

    host = ''
    size = 0
    type = ''
    url = ''

    biggest = 0
    biggesthost = ''
    biggesturl = ''

    displayedsummary = 0

    deflog = '/var/log/squid/access.log'

    try:
        lf = sys.argv[1]
    except:
        lf = deflog

    try:
        f = open(lf, "r")
    except IOError:
        print "Could not open log file"
        sys.exit(1)

    while 1:
        line = f.readline()

        if line == '':
            break

        splitline = line.split()

        host = splitline[0]
        size = int(splitline[9])
        type = splitline[10]
        url = splitline[6]

        if displayedsummary == 0:
            print 'First access in log at', splitline[3], splitline[4]
            displayedsummary = 1

        try:
            requests[host] = requests[host] + size
            hitsperhost[host] = hitsperhost[host] + 1
        except KeyError:
            try:
                requests[host] = size
                hitsperhost[host] = 1
            except:
                print "You don't seem to have emulate_httpd_log enabled"
                sys.exit(1)

        if size > biggest:
            biggest = size
            biggesthost = host
            biggesturl = url

        try:
            results[type] = results[type] + size
        except KeyError:
            try:
                results[type] = size
            except:
                print "You don't seem to have emulate_httpd_log enabled"
                sys.exit(1)

    try:
        print 'Last access in log at', splitline[3], splitline[4]
    except NameError:
        print 'Log file is empty'
        sys.exit(1)

    print 'Statistics by Host:'
    for i in requests.keys():
        try:
            hostname = DNS.revlookup(i)
        except:
            hostname = i
        print (requests[i] / 1000), 'KB by', hostname, 'in', hitsperhost[i], 'requests'

    hits = 0
    misses = 0
    rehit = re.compile('.*HIT.*')

    print '\nStatistics by request result:'
    for i in results.keys():
        if rehit.match(i):
            hits = hits + results[i]
        else:
            misses = misses + results[i]

        print (results[i] / 1000), 'KB marked as', i

    percentagehits = float(hits) / float(misses + hits) * 100

    print 'Total Hits =', hits / 1000, 'KB'
    print 'Total Misses =', misses / 1000, 'KB'
    print '% Hits =', percentagehits, '%'

    print 'Biggest download was', biggesturl, 'by', DNS.revlookup(biggesthost), 'at', biggest / 1000, 'KB'
    [/php]
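On the 1024-vs-1000 point: the difference is easy to see with one worked example (the byte count here is arbitrary):

```shell
# 1 MiB of traffic, reported both ways.
bytes=1048576
echo "$((bytes / 1000)) (dividing by 1000, slightly overstated)"
echo "$((bytes / 1024)) (dividing by 1024, true kilobytes)"
```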


  • Registered Users, Registered Users 2 Posts: 1,862 ✭✭✭flamegrill


    I've decided to make this a sticky as it will always be handy for n00bs :)

    Paul


  • Registered Users, Registered Users 2 Posts: 196 ✭✭charlieroot


    Originally posted by Hecate
    I wrote this to automatically update the ports tree on my freebsd box; it has a cron entry to run every saturday morning, gotta keep up to date with the latest and greatest:


    You might want to put a pkgdb -u or pkgdb -fu in there somewhere otherwise you'll end up with some nasty problems.

    Noel.


  • Registered Users, Registered Users 2 Posts: 1,419 ✭✭✭nadir


    Erm, I'm kinda unsure what I can or can't put here. I have loads of small simple scripts I've thrown together, but I'm not sure you could consider them all bash scripts; for example, I have a nice little xmms-irssi script that I rewrote from the irssi page with some updates, and stuff like that. This might be way off topic, but here is a very simple, easy-to-follow firewall script. I'm not sure how secure it is, but it functions fairly well against port scanners: it allows internal NAT, IP forwarding and masquerading, and filters TCP and UDP ports (allowing UT2003, Quake3, Army Ops Recon and others). Basically it will allow outbound connections but not inbound, while still allowing routing from an internal network. In other words, you should be able to use all your favourite apps: IRC (DCC), p2p, games, web, etc.

    #simple firewall, by nadir - 2003
    echo "loading irc modules"
    modprobe ip_conntrack_irc ports=$IRC_PORTS || SYSLOG
    modprobe ip_nat_irc
    #set vars
    UNPRIVPORTS="30000:35000" # unprivileged port range

    # Remove any existing rules from all chains
    echo "flushing previous rules"
    iptables --flush
    iptables -t nat --flush
    iptables -t mangle --flush

    echo "Starting nadir's firewall"
    echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -A INPUT -d 10.0.0.0/8 -i ppp0 -j DROP
    iptables -A INPUT -m state --state NEW,INVALID -j DROP
    iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
    iptables -A INPUT -p tcp --syn -j DROP

    #udp and tcp ports to allow.

    iptables -I INPUT 1 -p tcp -m multiport --dport 113,20045,9201,8481,8888 -j ACCEPT

    iptables -I INPUT 1 -p udp -m multiport --dport 1716,1717,1718,8777,27900 -j ACCEPT

    #This enables masquerading
    iptables -A POSTROUTING -t nat -o eth1 -j MASQUERADE

    # activate IP-Forwarding
    echo 1 > /proc/sys/net/ipv4/ip_forward

    echo "firewall rules implemented"
    echo "
    Rules are
    "
    #list rules
    iptables -t nat -nL
    iptables --list


  • Closed Accounts Posts: 5,564 ✭✭✭Typedef


    echo "Welcome to the ftp user addition utility"
    echo "You can cancel this process at any time by"
    echo "Holding down CTRL and tapping the c key"
    echo " "
    echo "Please enter the username you'd like to create and press [ENTER]"
    read luser

    echo "User " $luser " Ok y/n?"
    read val

    case "$val" in
        n|N)
            echo "Not continuing"
            exit 1
            ;;
        y|Y)
            mkdir /usr/data/$luser
            /usr/sbin/useradd -d /usr/data/$luser -s /sbin/nologin $luser
            passwd $luser
            passwd -x 2 $luser
            /usr/sbin/usermod -G $luser,writers $luser
            /bin/chown $luser /usr/data/$luser
            /bin/chgrp writers /usr/data/$luser
            /bin/chmod g+w /usr/data/$luser
            /bin/chmod 777 /usr/data/$luser
            ;;
        *)
            echo "You have not confirmed nor denied this input, exiting now"
            exit 0
            ;;
    esac


    for file in $( ls -t *.mp3 *.ogg | head -250 | tr ' ' '_');
    do
    #scp $file root@10.5.2.113:/storage/mp3
    scp -C "`echo $file | sed 's/_/ /g'`" root@10.5.4.0:/storage/mp3
    done



    /data/pgsql-site-1.5/bin/vacuumdb -p 6543 -z stockbyte-dev;
    /data/pgsql-site-1.5/bin/pg_dump -p 6543 stockbyte-dev > /data1/site-15-db-backup/$1;
    /bin/rm /data1/site-15-db-backup/$1.bz2;
    /usr/bin/bzip2 /data1/site-15-db-backup/$1;


  • Registered Users, Registered Users 2 Posts: 1,268 ✭✭✭hostyle


    Not exactly a shell script per se, but I run it from various shells :) (For some reason I had to stop using watch under tcsh, which I used to inform me when some of "the lads" logged in; so I wrote this to do it)


    #!/usr/bin/perl -w
    
    use strict;
    
    my (%users, @in);
    
    while (<DATA>) {
      chomp;
      $users{$_} = 0;
    }
    
    for (;;) {
      @in = split/ /,`users`;
      foreach my $n (keys %users) { 
        if ( !$users{$n} && map {/^$n$/} @in ) {
          $users{$n} = 1;
          print scalar localtime, " $n has logged in \n";
        }
        if ( $users{$n} && ! map {/^$n$/} @in ) {
          $users{$n} = 0;
          print scalar localtime, " $n has logged out \n";
        }
      }
      sleep 2;
    }
    
    __DATA__
    hostyle
    
    add
    other
    users
    here
    
    


  • Registered Users, Registered Users 2 Posts: 2,747 ✭✭✭niallb


    Nice one hostyle,
    I ended up adding this pid line.
    Following the instructions in the comment allows you to automate starting and stopping the script.
    Multiple logins would make it break, but it's only an extra line or two to get around that.


    #!/usr/bin/perl -w
    # All bar one line by hostyle - boards.ie
    # the ~/.watcher.pid file is to allow you to kill the process in .bash_logout
    # Add similar lines to your rc to automatically run and close it.
    # Make a copy in your own ~/bin directory if you want to change userlist. :-)
    #
    # ## Add these to ~/.bash_profile ##
    # PATH=$PATH:~/bin
    # /usr/local/bin/watcher &

    # ## Add these to ~/.bash_logout ##
    # kill -HUP `cat ~/.watcher.pid`
    # cp -f /dev/null ~/.watcher.pid

    `echo $$ >~/.watcher.pid`;

    use strict; ...


  • Registered Users, Registered Users 2 Posts: 1,268 ✭✭✭hostyle


    Originally posted by niallb
    Multiple logins would make it break, but it's only an extra line or two to get around that.

    "watch" does exactly the same - multiple logins = multiple reports. Removing duplicates from @in should fix it I think. Some methods here.

    I prefer shorter methods (like this) though:
    # @in = split/ /,`users`;
    @in = sort keys %{ {map {$_, 1} split/ /,`users`} };
    

    <edit>never mind me. this code is unnecessary as what niallb said below is true. i need stronger drugs </edit>
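The same dedup works from plain shell too, for anyone adapting the watcher as a shell script; a sketch with made-up names (in real use you'd substitute the output of `users`):

```shell
# `users` prints one name per login session, so a user logged in twice
# appears twice; sort -u collapses the duplicates and xargs rejoins them.
logins="adam bob adam carol bob"   # stand-in for: logins=$(users)
unique=$(printf '%s\n' $logins | sort -u | xargs)
echo "$unique"   # adam bob carol
```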


  • Registered Users, Registered Users 2 Posts: 2,747 ✭✭✭niallb


    Sorry.
    To clarify, I meant multiple logins would break my automation which only remembers a single pid for watcher. :-)

    The script works intuitively - it reports first login and last logout as far as I can see, just what you need.

    NiallB


  • Registered Users, Registered Users 2 Posts: 1,419 ✭✭✭nadir


    #!/bin/bash
    
    # this script was pwned by nadir 11/05/2004 - [email]nadir@zion.nuigalway.ie[/email] : for the fluxbox wm.
    #
    # It requires an entry like this in your fluxbox menu, 
    # [submenu] (Proc) {PID--Proc-State--Parent-Nice-Priority-%mem}
    # [include] (~/.fluxbox/proc)
    # [end]
    #
    # OK, this is a simple script to gather relevant processes from the /proc stat files and allow you to kill a process. I'd like to do this in C and have a full system monitor; if there are like-minded people in the fluxbox community who think this is a good idea, please let me know, I'd like to contribute.
    # It's a bit spammy and not terribly efficient even though I tried to keep it as small as possible, but I do find it useful, and I hope someone else out there finds some use for it too :).
    # Also I haven't really bugtested this much, so don't blame me if your computer fubars.
    
    #kotoba
    
    
    #please set the refresh rate here in seconds, note it will slow stuff down if you set this too low.
    
    refresh=50
    
    x=0
    while [ $x -lt 1 ]
    do
    
    sleep $refresh
    
    proclist=$(ls /proc |grep [/0-9])
    totalmem=$(cat /proc/meminfo |cut -d ":" -f 2 |sed -e 's/^[ \t]*//' -e 's/kB/ /g' -e q1)
    procs=($proclist)
    
    noprocs=${#procs[@]}
    num_elms=$(echo "$noprocs - 1" |bc) 
    lastchild="2"
    
    for ((a=1; a <= $num_elms ; a++))
    do
    
    if [ -d /proc/${procs[a]} ];
    then procstat=$(echo /proc/${procs[a]}/stat) && procstatus=$(echo /proc/${procs[a]}/status);
    else procstat="/proc/self/stat" && procstatus="/proc/self/stat";
    fi
    
    memstatus[a]=$(cat $procstatus |grep VmRSS |cut -d ":" -f 2 |sed -e 's/^[ \t]*//' -e 's/kB/ /g')  
    
    if [ ${memstatus[a]} ];
    then procmem[a]=${memstatus[a]};
    else procmem[a]="1";
    fi
    
    
    
    rawstats[a]="$(cat $procstat | awk '{ print $1, $2, $3, $4, $18, $21 }' |sed -e 's/(/ /g' -e 's/)/ /g')"
    
    child=$(echo ${rawstats[a]} |cut -d " " -f 4)
    name=$(echo ${rawstats[a]} | awk '{ print $2 }')
    mempercentage[a]=$(echo "scale = 10; ${procmem[a]} / $totalmem *100" |bc |cut -b 1-4)
    
    if [ $child -ne 1 ] || [ $name != "cat" ];
    then processinfo[a]="[exec] (${rawstats[a]} ${mempercentage[a]}) {killall -9 $name} \\n";
    fi
    
    done
    echo -e \\n ${processinfo[@]} > ~/.fluxbox/proctemp 
    cat ~/.fluxbox/proctemp |column -t > ~/.fluxbox/proc
    done
    



  • Registered Users Posts: 7 pixelbeat


    http://www.pixelbeat.org/scripts/
    I guess it's also worth mentioning these 1 line shell scripts :-)
    http://www.pixelbeat.org/cmdline.html


  • Advertisement
  • Closed Accounts Posts: 153 ✭✭Dustpuppy


    Have a look at this.
    A simple webserver based on a shell script.


    #!/bin/sh -e

    # httpd.sh - small webserver
    # Karsten Kruse 2004 www.tecneeq.de, idea from httpd.ksh www.shelldorado.com
    #
    # Start via inetd: 80 stream tcp nowait nobody /usr/local/sbin/httpd.sh

    root=${HTTP_DOC_ROOT:-/home/httpd/htdocs/}

    gettype(){
        case "$1" in
            *.xhtml|*.html|*.htm|*.XHTML|*.HTML|*.HTM) type="text/html" ;;
        esac
        echo ${type:-$(file -i "$1" | awk '{print $2}' | sed 's/;//')}
    }

    while read line ; do
        [ "$line" = "$(printf '\r\n')" ] && break # end of request header
        [ -z "$requestline" ] && requestline="$line" && break
    done

    set -- $requestline
    doc="$root/$2"
    [ -d "$doc" ] && doc="$doc/index.html"

    if [ "$1" != GET ] ; then
        printf "HTTP/1.0 501 Not implemented\r\n\r\n501 Not implemented\r\n"
    elif expr "$doc" : '.*/\.\./.*' >/dev/null ; then
        printf "HTTP/1.0 400 Bad Request\r\n\r\n400 Bad Request\r\n"
    elif [ ! -f "$doc" ] ; then
        printf "HTTP/1.0 404 Not Found\r\n\r\n404 Not Found\r\n"
    elif [ ! -r "$doc" ] ; then
        printf "HTTP/1.0 403 Forbidden\r\n\r\n403 Forbidden\r\n"
    elif [ -r "$doc" -a -f "$doc" ] ; then
        printf "HTTP/1.0 200 OK\r\nContent-type: $(gettype "$doc")\r\n"
        printf "Content-Length: $(wc -c < "$doc")\r\n\r\n"
        exec cat "$doc"
    else
        printf "HTTP/1.0 500 Server Error\r\n\r\n500 Server Error\r\n"
    fi

    # eof


  • Closed Accounts Posts: 4,763 ✭✭✭Fenster


    Here's a couple of my one-liners to make some things easier:
    #!/bin/sh
    
    freshclam
    clamscan -r --log=/home/<usr>/clamlog /
    

    Just put in your own user name where <usr> is. It updates the virus database (freshclam), runs a full scan of your HD (takes a while) and then outputs a log to your home folder. It's best run as root.
    #!/bin/sh
    
    /sbin/hdparm -d1 /dev/cdrom
    /sbin/hdparm -d1 -u1 -s180 -X66 -A1 -m16 /dev/hda
    

    This is a startup script linked into my run-level ones. If you don't know your hd or hdparm, avoid this one. It will increase your hd and cd performance (DMA is enabled on both) and also put my hd into full standby if I don't use it for 15 minutes.
    mount -t vfat /dev/sda1 /mnt/mem
    

    Mounts a flash drive, memory card or whatever else in /mnt/mem, as memory cards sometimes mount as several folders, which can be messy. This keeps them all in one place (need to mkdir /mnt/mem).


  • Registered Users Posts: 53 ✭✭martinoc


    :)


  • Registered Users, Registered Users 2 Posts: 1,865 ✭✭✭Syth


    I use this little script when I'm root and modifying system files:
    #! /bin/bash
    
    CURRTIME=$(date '+%Y-%m-%d-%H:%M')
    
    FILE=$1
    
    DIRECTORY=$(pwd)
    
    CHANGEDFILES_PATH="${HOME}/changedfiles"
    
    cp $FILE ${FILE}_${CURRTIME}
    echo >> ${CHANGEDFILES_PATH}
    echo ${CURRTIME} >> ${CHANGEDFILES_PATH}
    echo "${DIRECTORY}/${FILE}" >> ${CHANGEDFILES_PATH}
    
    echo "Please enter a description:"
    cat >> ${CHANGEDFILES_PATH}
    

    Call it by passing a file. You must be in the directory of that file. It'll copy that file appending the current time and date to the end. It'll ask for a description which it stores in ~/changedfiles along with the file and time.

    The advantage of storing the filenames in ~/changedfiles is that it makes it easy to go back over it to see what you had to change to get XYZ to work.


  • Registered Users, Registered Users 2 Posts: 1,268 ✭✭✭hostyle


    Syth wrote:
    I use this little script when I'm root and modifying system files:
    #! /bin/bash
    
    CURRTIME=$(date '+%Y-%m-%d-%H:%M')
    
    FILE=$1
    
    DIRECTORY=$(pwd)
    
    CHANGEDFILES_PATH="${HOME}/changedfiles"
    
    cp $FILE ${FILE}_${CURRTIME}
    echo >> ${CHANGEDFILES_PATH}
    echo ${CURRTIME} >> ${CHANGEDFILES_PATH}
    echo "${DIRECTORY}/${FILE}" >> ${CHANGEDFILES_PATH}
    
    echo "Please enter a description:"
    cat >> ${CHANGEDFILES_PATH}
    

    Call it by passing a file. You must be in the directory of that file. It'll copy that file appending the current time and date to the end. It'll ask for a description which it stores in ~/changedfiles along with the file and time.

    The advantage of storing the filenames in ~/changedfiles is that it makes it easy to go back over it to see what you had to change to get XYZ to work.

    Not to needlessly disparage you, but isn't that what comments are usually designed for? Also, you now need to manually call your script before or after you edit any file (you could at least invoke your chosen editor at the end). It seems a little pointless IMO (and that could be "just me"), but thanks for sharing all the same.


  • Closed Accounts Posts: 191 ✭✭vinks


    I don't think anyone has really shown newbies how to do for loops in a shell script, so here is a little script that is handy for connecting to machines sequentially and running a command:
    #!/bin/sh
    LIST=`seq 104 128`
    CMD="ps aux | grep cfservd"
    
    for i in $LIST;
    do
            echo node$i
            ssh node$i $CMD
    done
    


  • Registered Users, Registered Users 2 Posts: 1,419 ✭✭✭nadir


    vinks wrote:
    i dont think anyone has really shown newbies how to do for loops in a shell script so here is a little script that is handy for connecting to machines sequentially and running a command
    #!/bin/sh
    LIST=`seq 104 128`
    CMD="ps aux | grep cfservd"
    
    for i in $LIST;
    do
            echo node$i
            ssh node$i $CMD
    done
    

    Both hostyle and I have used for loops, just not the 'on the fly' type of usage you're using.
    for ((a=1; a <= $num_elms ; a++))
    
    is what I like for shell scripts, just because of its C-like syntax, and also
    for (;;) {
      @in = split/ /,`users`;
    
    that's pretty funky,
    but yeah whatever like, it's all good
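For newbies following along, here's that C-style loop as a tiny runnable example (bash syntax; it won't run under plain sh):

```shell
#!/bin/bash
# Sum the numbers 1..5 with a C-style arithmetic for loop.
sum=0
for ((i = 1; i <= 5; i++)); do
    sum=$((sum + i))
done
echo "$sum"   # 15
```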


  • Closed Accounts Posts: 191 ✭✭vinks


    Well, the pro of doing it my way is that you can walk an array and operate on it; it's particularly useful for working with things that aren't numbers, since you might not always have nice static names/IPs to play with. I guess the con is that it's filthy scripting. It's just handy for quickly doing things.
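A minimal sketch of that array-walking idea: looping over arbitrary names instead of a `seq` range. The hostnames below are made up, and echo stands in for the real ssh call:

```shell
# Iterate over a whitespace-separated list of names; no numbers needed.
HOSTS="web1 db1 mail.example.com"
for h in $HOSTS; do
    echo "would run on $h"
done
```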


  • Closed Accounts Posts: 4,763 ✭✭✭Fenster


    I've been teaching myself the basics of bash scripting, and here is my first crack at a half-decent script. It's for using clamav to scan your machine. Lots of commenting, so you can follow what's going on.

    I realise this is but one way of many to perform this task, but this one suits my ends :)
    #!/bin/bash
    
    #Ye olde clamscan script
    
    #First, lets set the variables...
    
    LOG=~/Log.txt			#This is the output log file from the scan
    QUAR=~/Quarantine		#This is the folder where we put any nasty crap we find. Kekeke
    ROOT_UID=0			#This is the root UID, for determination whether or not you're root..
    NOROOT=67			#What happens to non-root users..
    FOLDER=.			#Folder you want to scan from, default is the folder you're in and recursively from there.
    
    #First, we check if you're root...
    
    if [ "$UID" -ne "$ROOT_UID" ]
    then
    	echo "You're not root user"
    	exit $NOROOT
    else
    	echo "Please remember: This script will scan recursively from your current folder, not from /"
    fi
    
    sleep 5
    
    #Next, we make sure the quarantine folder exists (--move needs a directory)...
    
    mkdir -p $QUAR
    
    #Thirdly, we update the virus database...
    
    echo
    
    freshclam
    
    sleep 5
    
    #And finally, ye olde virus scan...
    
    clamscan --log=$LOG --move=$QUAR -r -v -i $FOLDER 
    
    echo
    
    #Lastly, a reminder of the log and quarantine files
    
    echo "A log of this scan has been saved to $LOG and any quarantined files to $QUAR"
    

    Flame it, please



  • Closed Accounts Posts: 4,842 ✭✭✭steveland?


    Sorry to butt in... but can someone please explain how to implement these scripts?

    (sorry for the n00b reply)


  • Closed Accounts Posts: 96 ✭✭krinDar


    steveland? wrote:
    Sorry to butt in... but can someone please explain how to implement these scripts?

    Most of the scripts given here can be put into a file and run directly,
    in particular the scripts beginning with:
    #!/bin/bash
    

    This line should be the first line of the script or it will not work.

    To run a file you require execute permissions on the file, which you can
    achieve by setting the mode (or permissions) as follows:
    chmod 500 file
    


  • Closed Accounts Posts: 703 ✭✭✭SolarNexus


    I use this to create .ISO backups of CDs & DVDs (it doesn't work on copy-protected content, unfortunately)
    #!/bin/bash
    # cd copy script
    defpath="/usr/local/luna/public/shared/"
    permission="1444"
    
    #
    echo "This script will aid in copying a CD or DVD to disk"
    echo "please enter the location where you wish to save the"
    echo "disk ISO to, excluding the .iso file extension"
    echo
    echo -n "<install path:> $defpath"
    read usrpath
    echo
    mkdir -pv $defpath$usrpath
    dd bs=2048 if="/dev/cdrom" of="$defpath$usrpath.iso" # 2048-byte blocks match the CD sector size; bs=32 is painfully slow
    chmod -Rv $permission $defpath$usrpath
    eject
    

    I have this run hourly to set folder permissions (I have a shared folder available to all users on the Windows network; without this, some files would become inaccessible)
    #!/bin/bash
    
    # config
    host="luna"
    installpath="/usr/local/$host"
    
    # users
    usr[0]="public"
    usr[1]="usr1"
    usr[2]="usr2"
    usr[3]="usr3"
    usr[4]="usr4"
    
    # permissions - umask
    permit[0]="0777"
    permit[1]="0755"
    permit[2]="0755"
    permit[3]="0755"
    permit[4]="0755"
    
    function main()
    {
    	# data members
    	local i=0
    
    	while [ $i -lt 5 ]; do
    
    		chmod "${permit[$i]}" -R "$installpath/${usr[$i]}"
    		echo "${usr[$i]} - ${permit[$i]}"
    
    		# increment
    		i=$((i + 1))
    
    	done
    }
    
    ## main entry ##
    main
    

    I also made a script (about 4 times now...) which downloaded the Steam dedicated server from Valve, checked for the prerequisite ncompress, and extracted and installed it (never really got it finished though); I'll post it if I do it again, as it can be a real bother to do manually.


  • Registered Users, Registered Users 2 Posts: 1,606 ✭✭✭djmarkus


    #!/bin/bash
    
    dd if=/dev/cdrom bs=1 skip=32808 count=32 > /tmp/title
    DVDTITLE=`cat /tmp/title | awk '{print $1}' `
    
    mplayer dvd://  -frames 1 -nosound -vo null  > /tmp/mpr
    
    TITLENUM=`cat /tmp/mpr | grep "titles" | awk '{print $3}'`
    echo "DVD Label: $DVDTITLE Titles: $TITLENUM"
    mkdir $DVDTITLE
    for i in `seq 1 $TITLENUM`;
    do
    	rm -f divx2pass.log frameno.avi
    	mencoder -ovc xvid  -xvidencopts pass=1:bitrate=1100 -nosound -vf pullup,pp=md dvd://$i -o /dev/null
    	mencoder -ovc xvid -xvidencopts pass=2:bitrate=1100 -nosound -vf pullup,pp=md dvd://$i -o "/tmp/tmp.avi"
    	mplayer -vo null -aid 128 -ao pcm:file="/tmp/tmp.wav" dvd://$i
    	oggenc -q 3 "/tmp/tmp.wav" "/tmp/tmp.ogg"
    	ogmmerge -o "./$DVDTITLE/Title $i.ogm" /tmp/tmp.avi /tmp/tmp.ogg
    done
    


    This is a shell script for creating Ogg media files (.ogm) from a DVD, in 2-pass XviD video and Ogg audio. It takes each title and puts it into a separate video file in a subdirectory named after the DVD label.

    It uses mplayer's mencoder to do the actual encoding. This takes quite a while!

    So the dependencies of this script are the ogm tools and mplayer with mencoder.


  • Closed Accounts Posts: 4,842 ✭✭✭steveland?


    Finally got around to making my own...

    Well, to be honest I nabbed it from someone else online, but theirs had no user-defined arguments, the max file size was hardcoded into the script, and you couldn't execute the final command (it only printed what the command should be).

    Basically it'll ask you for the max file size you want the file to be, which title on the DVD it should look for, what file you want to output to, and how long the film is, and then it gives you an mencoder command to rip the title into DivX. It'll then prompt whether you want it to run the mencoder command for you. Been using this for a while and it's dead handy.

    It'll save the file to /home/$USER/Videos/<thefilenameyouspecified>.avi
    [php]#!/bin/bash

    echo "Welcome to the DVD Ripper Wizard"

    echo "
    "

    echo "Please enter the maximum filesize you want your file to be (in MB)"
    read MAXFILESIZE

    echo "Please enter the Title at which the movie starts on the DVD"
    read CHAPTER

    echo "Please enter the output file you desire (without .avi)"
    read OUTPUTFILE

    echo "Please enter how long the movie is (in minutes, round to the nearest)"
    read MINUTES

    MAXSIZE=$((1000*$MAXFILESIZE))
    DURATION=$((60*$MINUTES)) # renamed from SECONDS, which bash reserves as a special variable
    AUDIOSIZE=$((20*$DURATION))
    FRAMES=$(($MAXSIZE - $AUDIOSIZE))
    RATE=$((($FRAMES*8) / $DURATION))
    FINALSIZE=$(( (($RATE * $DURATION)/8 + $AUDIOSIZE) /1000 ))

    echo "
    "
    echo "Estimated Rate: $RATE Kb/s"
    echo "Estimated Filesize: $FINALSIZE MB"
    echo "
    "

    echo ""
    echo "The following command will be used:"
    echo "mencoder dvd://$CHAPTER -ovc lavc -lavcopts vcodec=mpeg4:vhq:vbitrate=$RATE -vop scale -zoom -xy 640 -oac mp3lame -lameopts abr:br=128 -o /home/$USER/Videos/$OUTPUTFILE.avi"

    echo ""
    echo "
    "

    echo "Do you wish to run this command now? (y/n)"

    read runnow

    if [ "$runnow" == "y" ]; then
    echo "Running now!"
    # run the command!
    mencoder dvd://$CHAPTER -ovc lavc -lavcopts vcodec=mpeg4:vhq:vbitrate=$RATE -vop scale -zoom -xy 640 -oac mp3lame -lameopts abr:br=128 -o /home/$USER/Videos/$OUTPUTFILE.avi

    exit 0
    elif [ "$runnow" == "n" ]; then
    echo "Not Running"
    exit 0
    else
    echo "Command not recognised, exiting"
    exit 1
    fi[/php]
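    To sanity-check the sums above, here's a worked example with assumed inputs (a 700 MB target and a 120-minute film); note it uses DURATION rather than SECONDS, since bash reserves SECONDS as a special auto-incrementing variable:

```shell
# worked example of the wizard's size arithmetic, with assumed inputs
MAXFILESIZE=700                      # target size in MB
MINUTES=120                          # film length in minutes
MAXSIZE=$((1000 * MAXFILESIZE))      # 700000 KB total budget
DURATION=$((60 * MINUTES))           # 7200 seconds
AUDIOSIZE=$((20 * DURATION))         # 144000 KB budgeted for audio
RATE=$(( (MAXSIZE - AUDIOSIZE) * 8 / DURATION ))
FINALSIZE=$(( (RATE * DURATION / 8 + AUDIOSIZE) / 1000 ))
echo "rate: $RATE kbit/s, estimated size: $FINALSIZE MB"
# prints: rate: 617 kbit/s, estimated size: 699 MB
```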


  • Closed Accounts Posts: 191 ✭✭vinks


    Something I whipped up at work to run through a load of test cases, so that I'd have tagged output I could grep through to generate my gnuplot plots.

    [PHP]
    #!/bin/bash
    #PBS -N cpmd-IB
    #PBS -q parallel
    #PBS -l walltime=96:00:00,cput=6144:00:00
    #PBS -l nodes=32:ppn=2

    #
    # CPMD test case si63-120ryd, where MAXCPUTIME=93600
    #

    #
    #setup some variables
    #
    D=`date +"%Y-%m-%d--%H-%M-%S"`
    O=OUT__${D}

    #
    # location of the CPMDBIN
    #
    CPMDBIN=/usr/support/CPMDToolBox-3.9.2/bin/cpmd.x

    #
    # go to my job submission directory and then execute the program
    #
    cd $PBS_O_WORKDIR

    #
    # loop over benchmark cases (test scaling of cpmd over interconnect)
    #
    for i in 1 2 4 8 16 32 64
    do
    echo "== NODES $i ==" > ${O}-$i
    echo "==============" >> ${O}-$i
    date >> ${O}-$i 2>&1
    echo "==============" >> ${O}-$i
    mpiexec -n $i $CPMDBIN inp.wf.full >> ${O}-$i 2>&1
    echo "==============" >> ${O}-$i
    date >> ${O}-$i 2>&1
    echo "==============" >> ${O}-$i
    sleep 30
    done
    [/PHP]


  • Closed Accounts Posts: 7,230 ✭✭✭scojones


    Very simple script for converting image resolution.
    #!/bin/sh
    for img in *.JPG
    do
      convert -resize 1024x768 "$img" "resized-$img"
    done
    
    


  • Registered Users, Registered Users 2 Posts: 378 ✭✭sicruise


    rm -rf


  • Closed Accounts Posts: 6,151 ✭✭✭Thomas_S_Hunterson


    sicruise wrote:
    rm -rf
    I ran it but nothing works anymore, my computer won't boot. What's it supposed to do
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    Not


  • Closed Accounts Posts: 4,842 ✭✭✭steveland?


    Now, apologies for the cross-post, but I thought the finished product would be suited to this little archive of scripts.

    I have it set to run every 15 minutes to randomly change my desktop.

    Thanks to angelofdeath and generalmiaow for their help with this one.

    [php]#!/bin/bash

    DIRECTORY=/home/steve/Pictures/RandomDesktops/ # the directory with your files in it
    # This directory can only contain image files!
    DIRECTORYCOUNT=`ls $DIRECTORY | wc -l` # counts the files in your chosen directory
    VARIABLE=$((($RANDOM % $DIRECTORYCOUNT) + 1)) # find a random element; $((...)) replaces the deprecated $[...] form

    if [ $VARIABLE = 0 ] ; then # can't find the 0th file, returns 2
    VARIABLE=2
    fi

    # Now load each of the files in the directory into an array
    i="1"
    for fileName in `ls $DIRECTORY`
    do
    array[$i]="$fileName"
    i=`expr $i + 1`
    done

    gconftool-2 -t str --set /desktop/gnome/background/picture_filename $DIRECTORY${array[$VARIABLE]}
    gconftool-2 -t str --set /desktop/gnome/background/picture_options "stretched"
    [/php]
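    For reference, running it every 15 minutes is a one-line crontab entry (the script path here is an assumption; add it with `crontab -e`):

```shell
# m   h  dom mon dow  command
*/15  *  *   *   *    /home/steve/bin/randomdesktop.sh
```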


  • Registered Users, Registered Users 2 Posts: 1,421 ✭✭✭Steveire


    A very simple one for a task I do often.
    #! /bin/bash
    # apup: script to upgrade system
    if [ "$1" = "-d" ]; then
      echo "dist-upgrade"
      sudo aptitude update && sudo aptitude dist-upgrade
    else
      echo "regular upgrade"
      sudo aptitude update && sudo aptitude upgrade
    fi
    
    $ apup
    regular upgrade
    Password:    
    Reading package lists... Done
    Building dependency tree... Done
    Reading extended state information
    Initializing package states... Done
    Building tag database... Done
    etc...
    
    $ apup -d
    dist-upgrade
    Password:
    Reading package lists... Done
    Building dependency tree... Done
    Reading extended state information
    Initializing package states... Done
    Building tag database... Done
    etc...
    

    steveland? Couldn't you use
    for fileName in `ls $DIRECTORY | egrep -i "\.(jpg|gif|png|bmp|svg)$"`
    
    That way you'd solve the bug of having to have only image files in the directory.


  • Registered Users, Registered Users 2 Posts: 5,238 ✭✭✭humbert


    Wrote this to keep me updated when my IP address changes, so I can connect to my home computer from elsewhere. It also reboots the router when it disconnects from the internet, which it does from time to time. It's crude but it works. I'm not sure the $rebooted variable is working properly. It's my first Perl experience!
    #!/usr/bin/perl
    
    use Net::Telnet;
    use Net::Ping;
    
    $remote_host = 'some.remote.site';
    $local_host = 'router.local.address';
    $log_file = '/var/log/router_mon';
    
    $remote_ping = Net::Ping->new();
    
    $rebooted = 0; # note: this resets on every cron run, so it only counts reboots within a single invocation
    
    if(!$remote_ping->ping($remote_host)){
    
    while (!$remote_ping->ping($remote_host))
    {
       $local_ping = Net::Ping->new();
       exit 0 if !$local_ping->ping($local_host);
       $t = new Net::Telnet();
       $t->open($local_host);
       $t->print('admin');
       $t->waitfor('/Password:/');
       $t->print('my_password');
       $t->waitfor('/Belkin>/');
       $t->print('reboot');
       $t->close();
       while (!$local_ping->ping($local_host))
       {
           sleep(20);
       }
       $local_ping->close();
    
       $rebooted++;
    
    }
    $remote_ping->close();
    
    }
    
    $t = new Net::Telnet();
    $t->open($local_host);
    $t->print('admin');
    $t->waitfor('/Password:/');
    $t->print('my_password');
    $t->waitfor('/Belkin>/');
    $t->print('ifconfig ppp0');
    $t->waitfor('/has_ip=/');
    $t->waitfor('/ip=/');
    ($ip_addr, $match) = $t->waitfor('/,/');
    $t->print('exit');
    $t->close;
    
    open (FH, "+< $log_file") or die;
    
    while ( $line= <FH> ) {$ip_old = $line;}
    
    chomp $ip_old;
    
    if(!($ip_old eq $ip_addr)){
       print FH "$ip_addr\n";
    
       ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
    
       $to='my_email_address@gmail.com';
       $from= 'my_email_address@gmail.com';
       $subject = sprintf("new server ip %s",$ip_addr);
       $body = sprintf("system rebooted %u times at %u:%u on the %u/%u/%u, ip address obtained was: %s.",$rebooted,$hour,$min,$mday,$mon,$year+1900,$ip_addr);
    
       open(MAIL, '|/usr/sbin/sendmail -t');
    
       print MAIL "To: $to\n";
       print MAIL "From: $from\n";
       print MAIL "Subject: $subject\n";
       print MAIL "Content-type: text/plain\n\n";
       print MAIL "$body \n\n";
    
       close(MAIL);
    }
    
    close FH;
    

    Oh cron runs it every ten mins.


  • Registered Users, Registered Users 2 Posts: 865 ✭✭✭generalmiaow


    Haha! I didn't realise humbert's was a Perl script for a whole minute and got very confused. Looks like a great hack though. I have a bash script that does more or less the same thing (except it wgets a PHP script on my site which records my IP in a database). I must try building in the power to reboot my router (same problem, and also a telnettable Linksys router). No idea whether I can send telnet commands from a bash script, but I imagine it's possible.
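    On that last point: one rough way is to group the commands with pauses and pipe them into telnet's stdin. This is only a sketch; the login, password, and IP are placeholders for whatever your router expects:

```shell
# hypothetical helper: emit router commands with pauses between them,
# so each line arrives after the router has printed its prompt
send_session() {
    sleep 1; printf 'admin\n'         # login name (placeholder)
    sleep 1; printf 'my_password\n'   # password (placeholder)
    sleep 1; printf 'reboot\n'        # the command we actually want to run
}
# usage (assumed router address): send_session | telnet 192.168.1.1
```

    For anything less trivial, `expect` is the usual tool, since it can wait for the actual prompts instead of guessing with sleeps.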


  • Registered Users, Registered Users 2 Posts: 545 ✭✭✭ravydavygravy


    Heres a small bit of shell for drawing one of those progress spinners...
    #!/bin/sh
    
    I=0
    SPIN="[=          ]"
    echo -n "DEBUG: Test running ${SPIN}"
    
    while [ $I -lt 60 ]
    do
        tput cub 13
    
        case $SPIN in
                "[=          ]") SPIN="[ =         ]";;
                "[ =         ]") SPIN="[  =        ]";;
                "[  =        ]") SPIN="[   =       ]";;
                "[   =       ]") SPIN="[    =      ]";;
                "[    =      ]") SPIN="[     =     ]";;
                "[     =     ]") SPIN="[      =    ]";;
                "[      =    ]") SPIN="[       =   ]";;
                "[       =   ]") SPIN="[        =  ]";;
                "[        =  ]") SPIN="[         = ]";;
                "[         = ]") SPIN="[          =]";;
                "[          =]") SPIN="[=          ]";;
        esac
    
        echo -n "${SPIN}"
        sleep 1
            
        I=`expr $I + 1`
    done
    
    tput cub 13
    echo "[===DONE!===]"
    


  • Registered Users Posts: 2,328 ✭✭✭gamblitis


    Right, since you're all sticking up scripts: does anyone know a script to communicate with meteor.ie to send the free texts? I've seen it done, so I know it's possible, I just can't find it anywhere. Just wondering if somebody here might have one or know where to get one.


  • Closed Accounts Posts: 413 ✭✭sobriquet


    gamblitis wrote: »
    Right, since you're all sticking up scripts: does anyone know a script to communicate with meteor.ie to send the free texts? I've seen it done, so I know it's possible, I just can't find it anywhere. Just wondering if somebody here might have one or know where to get one.

    http://mackers.com/projects/o2sms/

    Supports, o2, voda, meteor. It's a cracking wee script.


  • Registered Users Posts: 2,328 ✭✭✭gamblitis


    Nice one cheers


  • Closed Accounts Posts: 2,349 ✭✭✭nobodythere


    Didn't know that was working with meteor... the one on the NUIG compsoc server isn't, so I ended up writing my own one, has only basic features but is EASILY adapted to changes and to other networks.

    You can get a copy hnyah:
    http://compsoc.nuigalway.ie/~grasshopa/meteor2


  • Registered Users Posts: 142 ✭✭derby7


    Ah, shell scripting: it sounds daunting, but it's a very useful & powerful thing :)

    Here's one I was messing around with a long time ago, on Ubuntu 6.10; I actually haven't messed around with scripts since then, TBH.

    It's to do with the 'find' command.
    The '-xdev' option lets me search just the local drive/Linux partition.
    This version was one of my first attempts; I modified it to suit my needs later.

    #!/bin/sh
    clear
    echo "Enter 1 for Word-Find program:"
    echo "Enter 2 for File-Find program:"
    read a

    if [ "$a" -eq "1" ]
    then
    echo "Enter search WORD:"
    read w
    echo "Enter /path/file to search:"
    read p
    # case-insensitive search, recursive, show only file names
    grep -irl "$w" "$p"

    elif [ "$a" -eq "2" ]
    then
    echo "Enter search FILE:"
    read f
    echo "Enter /path/ to search:"
    read pa
    find "$pa" -xdev -type f -iname "$f"

    else
    echo "Wrong choice !"
    fi
    exit $?


  • Closed Accounts Posts: 580 ✭✭✭karlr42


    Here's a nice script that downloads the xkcd comic archive. It's in perl and requires wget, but I'd imagine all distros have perl, so I'm going to call it a shell script :P
    #!/usr/bin/perl
    #Downloads the xkcd archive from http://imgs.xkcd.com/comics/
    #Released for totally free use for anyone who wishes to!
    @args1 = ("wget","http://imgs.xkcd.com/comics/");
    system(@args1);    #call wget to get the html of the list
    open (FILE, "<index.html") or die($!);
    foreach(<FILE>){
        if (/<tr><td class="n">/){    #only the lines that are links to comics contain this string
            s/.*?>.*?>.*?>(.*?)<.*/$1/;    #regex strips html and leaves behind "the_comic_name.png"
            print "$_\n";
            chomp ($comic = $_);    #get rid of trailing newline character
            @args2 = ("wget","http://imgs.xkcd.com/comics/"."$comic");
            print "Obtaining $comic..\n";    
            system(@args2);        #get the actual comic
            print "[OK]\n";
        }
    }
    close FILE or die($!);
    @args3 = ("rm","index.html");    #erase the html page so script will work next time
    system(@args3);
    

    And here's a similar script to download the current xkcd comic. I guess you'd run the above, then cron the below to run every Monday, Wednesday, and Friday!
    #!/usr/bin/perl
    @args1 = ("wget","http://xkcd.com/");
    system(@args1);    #call wget to get the html
    open (FILE, "<index.html") or die($!);
    foreach(<FILE>){
        if (/<h3>Image URL/){    #find the link for hotlinking
            s/.*?: {1}(.*?)<.*/$1/;    #strip html
            chomp;
            print "Obtaining comic..";
            @args2 = ("wget","$_");
            system(@args2);
            print "[OK]\n";
        }
    }
    close FILE or die($!);
    @args3 = ("rm","index.html");    #erase the html page so script will work next time
    system(@args3);
    

    Comments are more than welcome.


  • Closed Accounts Posts: 3 gtxizang


    Not a shell script, but a shell script request. Apologies if misposted.

    I've just migrated full time to Ubuntu 8.04/ LinuxMint on my laptop as my Apple G4 tower has 'died'.

    Anyway, I have all my photos backed up in Time Machine, which is fine if you have another Mac to get them off. I don't. I found the following article on how to find an individual file (the problem is something to do with hard links).

    http://carsonbaker.org/2008/06/23/time-machine-restore/

    Anyway, the way that iPhoto stores the photos results in hundreds or thousands of hard links. So I'm looking for a script to 'extract' all the photos from the Time Machine disk and copy them to the laptop.

    I'd like to leave the Time Machine intact in case I do get another Mac.

    All help appreciated.

    G.
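    Not a full answer, but a rough starting point might look like the sketch below (the mount point, destination, and extension list are all assumptions). `cp` reads through hard links and writes independent copies, so the Time Machine volume is left untouched:

```shell
# hypothetical helper: flatten every image under $1 into plain copies in $2
extract_photos() {
    src=$1; dest=$2
    mkdir -p "$dest"
    # -n: don't clobber if two folders hold a file with the same name
    find "$src" -type f \( -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.png' \) \
        -exec cp -n -t "$dest" {} +
}
# usage (assumed paths): extract_photos /media/timemachine ~/Pictures/restored
```

    Caveat: because the output is flattened into one folder, files with identical names from different albums would need renaming rather than `-n` skipping.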

